
Select number of jobs for fitting #8

@berkgercek

Description


One effective way to make fitting large numbers of single units feasible is to scale up the number of concurrent jobs. This can already be done on a cluster by spawning a separate job per recording unit or per recording session, but it would be nice to implement a version of this at the single-node or single-machine level.

In practice this means:

  1. Adding an n_jobs (or similar) argument to the .fit() method of the GLM classes
  2. Incorporating a multiprocessing package into the backend of neurencoding, probably joblib or something similar that allows a flexible choice of how the individual processes are run (e.g. joblib's "dask" backend)
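As a minimal sketch of the pattern described above: the `fit` and `_fit_single_unit` names below are hypothetical (they are not the actual neurencoding API), and the per-unit "fit" is an intercept-only Poisson GLM (whose MLE has a closed form) standing in for a real solver. The standard library's `concurrent.futures` is used here so the example is self-contained; joblib's `Parallel`/`delayed` would expose the same map-over-units pattern while adding the flexible backend choice mentioned in point 2.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def _fit_single_unit(spike_counts):
    # Hypothetical per-unit fit. For an intercept-only Poisson GLM the
    # maximum-likelihood rate is the sample mean, so the fitted
    # coefficient is log(mean count). A real implementation would run
    # the full GLM solve here.
    mean_count = sum(spike_counts) / len(spike_counts)
    return math.log(mean_count)

def fit(binned_spikes, n_jobs=1):
    # binned_spikes: one sequence of spike counts per unit.
    # n_jobs controls how many per-unit fits run concurrently,
    # mirroring the proposed .fit(n_jobs=...) argument.
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(_fit_single_unit, binned_spikes))
```

Because each unit's fit is independent, swapping the executor for joblib (or a process pool, for CPU-bound solvers) changes nothing in the calling code beyond the `n_jobs` plumbing.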

This is far less urgent than other improvements, but it would be a good quality-of-life feature. It could be part of a 1.1 release or wait until the 2.0 release; it will only be part of 1.0 if it turns out to be very easy to implement.

Metadata

Assignees

No one assigned

Labels

enhancement (New feature or request), long term ideas (Difficult, large-scope ideas for future releases)
