Katherine proc spikes #704
base: master
Conversation
@resatomo please be the primary reviewer, since you are most familiar with the parallel pipeline for Neuropixels. @leoscholl please look it over to catch any glaring issues, and @ajm2605 it would probably be good for you to familiarize yourself with the preproc pipeline!
aopy/data/db.py
Outdated
if 'record_headstage' in params and params['record_headstage']:
    sources.append('broadband')
    sources.append('lfp')
    sources.append('ap')
Can you make this depend on the drive type?
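A hypothetical sketch of this suggestion: select the preprocessed sources from the drive type instead of appending all three unconditionally. The `drive_type` key and the exact per-drive source lists are assumptions for illustration, not the repo's actual params schema.

```python
def choose_sources(params):
    """Pick which data sources to preprocess based on the drive type (illustrative)."""
    sources = []
    if params.get('record_headstage'):
        if params.get('drive_type') == 'neuropixels':  # hypothetical key
            sources += ['ap', 'lfp']                   # Neuropixels provides AP and LFP streams
        else:
            sources += ['broadband', 'lfp', 'ap']      # derive everything from broadband
    return sources
```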
n_ch = 0
while n_ch < n_channels:
    broadband_chunk = bb_data[n_samples_read:n_samples_read+time_chunksize, n_ch:n_ch+channel_chunksize]
    ap_chunk = precondition.filter_ap(broadband_chunk, samplerate, **filter_kwargs)
Just to check: this filter_ap doesn't downsample the data, unlike filter_lfp. Is this okay?
For spike detection, downsampling is not good, and a 300-10000 Hz bandpass is good. But for saving ap_data, my understanding from the previous meeting is that you need to bandpass between 300-5000 Hz and downsample to 10 kHz. I don't know how to deal with both.
I talked to @leoscholl a while ago when writing this, and we thought it best to save ap_data bandpassed at 300 Hz-10 kHz, sampled at the original 25 kHz. The reason is that if you're analyzing AP-band data, you probably want frequencies up to 10 kHz and the higher sampling rate. The downside is that it takes up more storage. Maybe I should bring it up to the group?
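The approach described above (bandpass 300 Hz-10 kHz, keeping the original 25 kHz samplerate) could be sketched with scipy as follows. This is only an illustration of that choice; the actual `precondition.filter_ap` implementation may use a different filter design.

```python
import numpy as np
from scipy import signal

def filter_ap_band(broadband, samplerate, low=300, high=10000, order=4):
    """Bandpass broadband data (samples x channels) into the AP band
    without downsampling, per the discussion above. Sketch only."""
    sos = signal.butter(order, [low, high], btype='bandpass',
                        fs=samplerate, output='sos')
    # zero-phase filtering so spike timing is not shifted
    return signal.sosfiltfilt(sos, broadband, axis=0)
```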
aopy/data/bmi3d.py
Outdated
# Detect spikes below threshold (for extracellular recordings)
threshold = precondition.calc_spike_threshold(ap_data, high_threshold=False, rms_multiplier=rms_multiplier)
spike_times, spike_waveforms = precondition.detect_spikes_chunk(ap_data, samplerate,
    threshold, chunksize, above_thresh=False, **detect_kwargs)
Did you try +/- thresholds? Using only a negative threshold may lose some units, because waveforms change depending on electrode position. https://www.nature.com/articles/nrn3241
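A minimal sketch of the +/- threshold idea: flag samples crossing either a negative or a positive threshold, then keep only the onset of each crossing run. The real `detect_spikes_chunk` API differs; these names are illustrative only.

```python
import numpy as np

def detect_crossings_dual(data, neg_thresh, pos_thresh):
    """Return sample indices where a 1-D trace first crosses either threshold."""
    crossed = (data < neg_thresh) | (data > pos_thresh)
    # keep only the first sample of each contiguous crossing run
    prev = np.concatenate(([False], crossed[:-1]))
    return np.flatnonzero(crossed & ~prev)
```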
spike_metadata = metadata
spike_metadata['spike_threshold'] = threshold
spike_metadata['refractory_violations_removed'] = True
spike_metadata['spike_pos'] = {chan_number: chan_number for chan_number in range(n_channels)}  # channel identity of each unit (here, iunit=ichan)
I'm not sure spike_pos is a good name, because the values you store are not positions.
aopy/data/bmi3d.py
Outdated
    times = np.array([])
    waveforms = np.array([])
else:
    times, idx = precondition.filter_spike_times_fast(spike_times[ichan])
I think it would be good to save refractory_period in the metadata.
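For example, the refractory cleanup could return the indices it kept, and the period it used could then be written into spike_metadata. This stand-in for `filter_spike_times_fast` and the metadata key name are assumptions for illustration, not the repo's implementation.

```python
import numpy as np

def filter_refractory(spike_times, refractory_period=0.001):
    """Drop spikes within the refractory period of the previous accepted
    spike; return (cleaned times, kept indices). Sketch only."""
    spike_times = np.asarray(spike_times)
    keep = []
    for i in range(len(spike_times)):
        if not keep or spike_times[i] - spike_times[keep[-1]] >= refractory_period:
            keep.append(i)
    idx = np.array(keep, dtype=int)
    return spike_times[idx], idx

# then, per the suggestion above (hypothetical key name):
# spike_metadata['refractory_period'] = refractory_period  # seconds
```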
aopy/preproc/wrappers.py
Outdated
aodata.save_hdf(result_dir, result_filename, spike_metadata, f'drive{idrive}/metadata', append=True)
if 'neuropixels' in files:
    print(1)
I probably forgot to delete this line. Could you delete it?
Test functions for proc_ap and proc_spikes would be helpful.
Preproc pipeline for ecube recordings with spikes.
These steps are wrapped in proc_ap() and proc_spikes().