Memory corruption #58

@roemerseb

Description


When running the map-based heterogeneous model with fMRI data from one of my own subjects, the job fails at an early step due to memory issues (see picture). I tried to limit the CMAES size by setting popsize=5 and n_iter=5, but the issue persists. I also ran the code both via Singularity and by installing cubnm directly in an environment, which didn't change the result: the job fails once it hits 250 GB of memory.
I would be very grateful for any ideas on how to tackle this. Is there anything about my input that could cause this excessive memory use and eventual failure?

[Image: screenshot of the failed job's memory error]

This is the FC matrix I calculated and wanted to feed in, as well as the derived FCD.

Code used:
FC:

```python
from cubnm import utils  # assuming the utils module is imported from cubnm

emp_fc_tril = utils.calculate_fc(
    emp_bold_zscored,
    exc_interhemispheric=True,
    return_tril=True,
)
```

FCD:

```python
emp_fcd_tril = utils.calculate_fcd(
    emp_bold,
    window_size=window_size,
    window_step=window_step,
    exc_interhemispheric=True,
    return_tril=True,
)
```

FC.csv
FCD.csv
