Integration with CaImAn for automatic ROI detection #46

@olivierdelree

Description

A good avenue for improvement is the automatic detection of ROIs after chunks have been motion corrected.

The idea would be to plug CaImAn in between the motion correction and the ROI drawing (potentially skipping the latter) to reduce manual labour as much as possible.

Although CaImAn is quite complex, the portion we're interested in only requires a few steps. Assuming a file has been motion corrected, the script would boil down to:

import caiman as cm
import h5py
import numpy as np
from caiman.source_extraction import cnmf

path = "<PATH_TO_HDF5_FILE_TO_USE>"

# Load the motion-corrected stack from the file (convert to float as OpenCV requires it)
with h5py.File(path, "r") as f:
    images = f["/preprocessing/motion_correction/imaging"][:].astype(float)

# Save as a memmapped file so that the CNMF model can do parallel computations
file_name = cm.save_memmap([images], base_name="drimmy", order="C")

# Load the file back up as a memmapped file
yr, dims, num_frames = cm.load_memmap(file_name)
images = np.reshape(yr.T, [num_frames, *dims], order="F")

# Set up the parameters for the CNMF (see the main demo on the CaImAn GitHub)
parameters_dict = ...
parameters = cnmf.params.CNMFParams(params_dict=parameters_dict)

# Create the CNMF model and fit the stack
n_processes = 1  # increase to run the fit in parallel
cnmf_model = cnmf.CNMF(n_processes, params=parameters)
cnmf_model.fit(images)

# Improve the fitting now that it is seeded
cnmf_refit = cnmf_model.refit(images)

# Filter contours based on quality metrics defined in the parameters
cnmf_refit.estimates.evaluate_components(images, cnmf_refit.params)

Using the above snippet, we can then use the cnmf_refit.estimates.A array to get the component masks and extract the ROIs with OpenCV by looking for contours (there might also be a CaImAn function that returns the ROIs directly as a list of coordinates).

The main problem is that although CaImAn advertises compatibility with Python 3.12 (and should therefore be compatible with us), it currently only supports it when installed through Anaconda. There is a conflict between the tensorflow and Python versions: I am assuming the Anaconda channels provide a tensorflow >=2.4.0,<2.16 build compatible with Python 3.12, whereas PyPI only offers tensorflow>=2.16 for 3.12. And tensorflow>=2.16 requires keras>=3, which prevents the use of the pre-trained models shipped with CaImAn (i.e., the evaluate_components call errors out).

This means we either need them to re-train their models with a newer version of tensorflow, or we need to make ROI detection a separate part of the pipeline that requires creating a conda environment (probably through a couple of notebooks: one for tuning parameters and one acting as an extension of the pipeline).
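For the separate-environment option, a sketch of the setup (assuming the caiman package is available from the conda-forge channel, as in the CaImAn install docs; the environment name and notebook filename are placeholders):

```shell
# Create a dedicated environment for the ROI-detection step
conda create -n caiman-roi -c conda-forge caiman
conda activate caiman-roi
# Run the detection notebook inside that environment
jupyter notebook roi_detection.ipynb
```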

Although automatic detection of ROIs is not a high priority due to the small number of cells in the FOVs, it might be nice to have a look at it in the coming months.

Metadata

Labels: ROI drawing (Issue relates to the ROI drawing step), enhancement (New feature or request)