
Error when running inference_01_cls_token_raw_data_plotting #20

@ATP-BME

Description
RuntimeError Traceback (most recent call last)
Cell In[19], line 10
7 example1[variable_of_interest_col_name] = [example1[variable_of_interest_col_name]]
9 processed_example1 = preprocess_fmri(example1)
---> 10 encoder_output = model.vit(
11 signal_vectors=processed_example1["signal_vectors"],
12 xyz_vectors=processed_example1["xyz_vectors"],
13 output_attentions=True,
14 output_hidden_states=True
15 )

File ~/miniconda3/envs/brainlm/lib/python3.10/site-packages/torch/nn/modules/module.py:1751, in Module._wrapped_call_impl(self, *args, **kwargs)
1749 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1750 else:
-> 1751 return self._call_impl(*args, **kwargs)

File ~/miniconda3/envs/brainlm/lib/python3.10/site-packages/torch/nn/modules/module.py:1762, in Module._call_impl(self, *args, **kwargs)
1757 # If we don't have any hooks, we want to skip the rest of the logic in
1758 # this function, and just call forward.
1759 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1760 or _global_backward_pre_hooks or _global_backward_hooks
1761 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1762 return forward_call(*args, **kwargs)
1764 result = None
...
104 ) # --> [batch, num_voxels, num_patch_tokens, timepoint_patching_size]
105 signal_projection = self.signal_embedding_projection(reshaped_signal_vectors)
107 # Project xyz coordinates into spatial embedding

RuntimeError: shape '[1, 200, -1, 20]' is invalid for input of size 84800
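
The failing reshape happens while the signal embedding splits each region's time series into patches. Reading the numbers off the error: 84800 elements with batch size 1 and 200 regions means 84800 / 200 = 424 timepoints, and 424 is not divisible by the timepoint patch size of 20, so a view of shape [1, 200, -1, 20] cannot exist. Below is a minimal sketch of the failure and one possible workaround (truncating the recording to a multiple of the patch size before calling model.vit); the values are taken from the traceback and the fix is illustrative, not the repository's official preprocessing.

```python
import torch

# Reproduce the reshape from the traceback, assuming batch=1, 200 parcels,
# and timepoint_patching_size=20. 1 * 200 * 424 = 84800 elements, and 424
# timepoints is not a multiple of 20, so the reshape must fail.
signal_vectors = torch.randn(1, 200, 424)
try:
    signal_vectors.reshape(1, 200, -1, 20)
except RuntimeError as e:
    print(e)  # shape '[1, 200, -1, 20]' is invalid for input of size 84800

# Hypothetical workaround: trim the time axis to a multiple of the patch
# size before passing the tensor to the encoder (420 = 21 * 20 here).
patch_size = 20
usable_len = (signal_vectors.shape[-1] // patch_size) * patch_size
trimmed = signal_vectors[..., :usable_len]
print(trimmed.reshape(1, 200, -1, patch_size).shape)  # torch.Size([1, 200, 21, 20])
```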
