This guide walks you through installing spikedetect and running your first spike detection in under 5 minutes.
- Python 3.9 or later (check with `python --version`)
- pip package manager (check with `pip --version`)
- A recording file (`.mat` or `.abf`)
If you don't have Python yet, install Anaconda or Miniconda. Anaconda ships with Python, pip, numpy, scipy, and matplotlib pre-installed; Miniconda is a minimal Python + pip, and the install step below pulls in the rest.
```bash
# Clone the repository
git clone https://github.com/maxwellsdm1867/spikeDetection.git

# IMPORTANT: cd into the spikedetect/ subdirectory (where pyproject.toml lives)
cd spikeDetection/spikedetect

# Install the package (editable mode so you can modify it if needed)
pip install -e .
```

This automatically installs the three required dependencies:
- numpy (>= 1.24) -- array math
- scipy (>= 1.10) -- signal filtering, peak finding, .mat file loading
- matplotlib (>= 3.7) -- plotting and interactive GUIs
To verify the install worked:

```bash
python -c "import spikedetect; print(spikedetect.__version__)"
# Should print: 0.1.0
```

Install these based on what you need:
```bash
# If you have .abf files or MATLAB v7.3 HDF5 .mat files:
pip install -e ".[io]"
# Installs: pyabf (ABF file reader) + h5py (HDF5/MATLAB v7.3 reader)

# If you want faster DTW on large datasets:
pip install -e ".[fast]"
# Installs: numba (JIT compiler) + dtaidistance
# NOTE: If numba causes warnings, it's safe to skip -- pure numpy works fine

# If you want to run the test suite:
pip install -e ".[dev]"
# Installs: pytest + pytest-cov + ruff

# All of the above at once:
pip install -e ".[all]"
```

These are NOT required but useful in specific situations:
```bash
# To export spike times as a pandas DataFrame (result.to_dataframe()):
pip install pandas

# To use interactive GUIs in Jupyter notebooks:
pip install ipympl
# Then add %matplotlib widget at the top of your notebook
```

```python
import spikedetect as sd

rec = sd.load_recording("path/to/trial.mat")
print(f"Loaded: {rec.name}")
print(f"  {rec.n_samples} samples, {rec.sample_rate} Hz, {rec.duration:.2f} s")
```

If the .mat file already has spike detection parameters and results from a previous MATLAB run, they are automatically loaded into `rec.result`.
```python
rec = sd.load_abf("path/to/recording.abf")
```

```python
import numpy as np

voltage = np.load("my_voltage.npy")  # or however you have your data
rec = sd.Recording(
    name="experiment_001",
    voltage=voltage,
    sample_rate=50000,  # Hz
)
```

```python
# Create default parameters for your sample rate
params = sd.SpikeDetectionParams.default(fs=rec.sample_rate)
```

This gives you sensible starting values. The key parameters are:
| Parameter | Default | What it does |
|---|---|---|
| `hp_cutoff` | 200 Hz | High-pass filter cutoff |
| `lp_cutoff` | 800 Hz | Low-pass filter cutoff |
| `diff_order` | 1 | Differentiation order (0, 1, or 2) |
| `polarity` | 1 | Signal polarity (+1, or -1 to flip the signal) |
| `peak_threshold` | 5.0 | Minimum peak height for candidates |
| `distance_threshold` | 15.0 | Maximum DTW distance to accept a spike |
| `amplitude_threshold` | 0.2 | Minimum spike amplitude |
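To build intuition for the first four parameters, here is a rough sketch of the kind of preprocessing they control, assuming a Butterworth bandpass (illustrative only -- the package's actual `SignalFilter` may differ in filter design and edge handling):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def conceptual_filter(x, fs, hp_cutoff=200.0, lp_cutoff=800.0, diff_order=1, polarity=1):
    """Illustrative bandpass -> differentiate -> polarity chain (not the library's code)."""
    sos = butter(2, [hp_cutoff, lp_cutoff], btype="band", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)          # zero-phase bandpass between hp_cutoff and lp_cutoff
    for _ in range(diff_order):
        y = np.diff(y, prepend=y[0])  # differentiate while keeping the length constant
    return polarity * y               # polarity=-1 flips downward-going spikes upward

fs = 50_000
t = np.arange(int(0.2 * fs)) / fs
x = np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 10 * t)  # in-band tone + slow drift
y = conceptual_filter(x, fs, diff_order=0)  # the 10 Hz drift is removed by the high-pass
```

The thresholds in the last three rows operate on the output of this stage rather than on the raw voltage.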
You'll need a spike template before running detection. Either:
- Use the interactive GUI (Step 4a), or
- Provide one from a previous run (Step 4b)
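Either way, a spike template is just a short 1-D waveform. For intuition, building one by averaging a few aligned candidate snippets can be sketched like this (illustrative only -- the library builds it from the peaks you click, and the snippet width here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
width = 40  # samples per snippet (assumed; the params call this spike_template_width)

# three noisy copies of the same underlying spike shape
shape = np.exp(-((np.arange(width) - width // 2) ** 2) / 20.0)
snippets = [shape + 0.05 * rng.normal(size=width) for _ in range(3)]

# the template is the element-wise mean of the aligned snippets
template = np.mean(snippets, axis=0)
```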
The easiest way to get started is to run the included GUI workflow script. It walks you through all 4 interactive GUIs in order:

```bash
cd spikedetect

# With synthetic data (no file needed -- great for a first test)
python examples/gui_workflow.py

# With your own recording
python examples/gui_workflow.py path/to/trial.mat

# ABF files work too
python examples/gui_workflow.py path/to/recording.abf
```

The script (examples/gui_workflow.py) handles loading, filtering, template matching, and detection for you. Each GUI opens as a pop-up window where you interact, then press Enter to move to the next step.
Here is what each GUI does:
- FilterGUI -- Adjust bandpass filter sliders (HP, LP, differentiation order, polarity, peak threshold). The top panel shows the raw trace with raster ticks; the bottom shows the filtered signal with detected peaks (red dots). Press Enter to accept.
- TemplateSelectionGUI -- Click on 3-5 clear spike peaks in the filtered data to build a template. Selected peaks turn green and their waveforms appear in the bottom panel. Press Enter to accept.
- ThresholdGUI -- A scatter plot of DTW distance vs. amplitude for all candidate peaks. Click to move the threshold lines. Press b to toggle between distance and amplitude thresholds. Waveform panels show accepted (blue) and rejected (yellow) spikes. Press Enter to accept.
- SpotCheckGUI -- Step through each detected spike one by one. Press y to accept, n to reject, arrow keys to adjust spike position, Tab to skip. Click raster ticks or scatter dots to jump to a specific spike. Press Enter when done.
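ThresholdGUI's distance axis is the DTW (dynamic time warping) distance between each candidate waveform and the template. For intuition, here is a minimal textbook DTW sketch (the package may use dtaidistance or numba-accelerated code instead):

```python
import numpy as np

def dtw_distance(a, b):
    """Plain O(n*m) dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three neighboring alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, np.pi, 20))   # idealized spike shape
candidate = template + 0.05                    # close match: small distance
junk = np.zeros(20)                            # poor match: large distance
print(dtw_distance(template, candidate) < dtw_distance(template, junk))  # True
```

Candidates whose distance to the template exceeds `distance_threshold` are rejected, which is exactly the line you drag in the GUI.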
Using the GUIs from your own Python script
If you want to call the GUIs from your own code instead of using the example script:
```python
import spikedetect as sd
from spikedetect.gui import FilterGUI, TemplateSelectionGUI, ThresholdGUI, SpotCheckGUI
from spikedetect.pipeline.filtering import SignalFilter
from spikedetect.pipeline.peaks import PeakFinder
from spikedetect.pipeline.template import TemplateMatcher

# Load your data
rec = sd.load_recording("path/to/trial.mat")
params = sd.SpikeDetectionParams.default(fs=rec.sample_rate)

# 1. Tune filter settings with live sliders
filter_gui = FilterGUI(rec.voltage, params)
params = filter_gui.run()

# 2. Filter the data (needed for template selection)
start_point = round(0.01 * rec.sample_rate)
unfiltered_data = rec.voltage[start_point:]
filtered_data = SignalFilter.filter_data(
    unfiltered_data,
    fs=params.fs,
    hp_cutoff=params.hp_cutoff,
    lp_cutoff=params.lp_cutoff,
    diff_order=params.diff_order,
    polarity=params.polarity,
)

# 3. Click on peaks to select seed spikes for the template
template_gui = TemplateSelectionGUI(filtered_data, params)
params.spike_template = template_gui.run()

# 4. Run initial detection
result = sd.detect_spikes(rec, params)

# 5. Build match_result for the ThresholdGUI scatter plot
spike_locs = PeakFinder.find_spike_locations(
    filtered_data,
    peak_threshold=params.peak_threshold,
    fs=params.fs,
    spike_template_width=params.spike_template_width,
)
match_result = TemplateMatcher.match(
    spike_locs=spike_locs,
    spike_template=params.spike_template,
    filtered_data=filtered_data,
    unfiltered_data=unfiltered_data,
    spike_template_width=params.spike_template_width,
    fs=params.fs,
)

# 6. Fine-tune distance/amplitude thresholds
threshold_gui = ThresholdGUI(match_result, params)
params = threshold_gui.run()

# 7. Re-run detection with updated thresholds
result = sd.detect_spikes(rec, params)

# 8. Review individual spikes (y/n/arrow keys)
spotcheck = SpotCheckGUI(rec, result)
result = spotcheck.run()
print(result.summary())
```

If you have parameters from a previous run (e.g., loaded from the .mat file):
```python
# Use the params that were loaded with the recording
params_from_file = rec.result.params  # if the .mat had previous detection results
result = sd.detect_spikes(rec, params_from_file)
```

Or set the template manually:

```python
params.spike_template = my_template_array  # 1-D numpy array
result = sd.detect_spikes(rec, params)
```

```python
# Quick summary
print(result.summary())
# Output:
# Spike Detection Result
# Spikes found: 296
# Time range: 0.162 - 7.882 s
# Mean ISI: 26.2 ms (range 5.3 - 112.4 ms)
# Mean firing rate: 38.2 Hz
# Spot-checked: no

# Spike times as sample indices (0-based)
print(result.spike_times[:10])

# Spike times in seconds
print(result.spike_times_seconds[:10])

# Number of spikes
print(result.n_spikes)

# Plot the recording with spike markers
rec.result = result
fig = rec.plot()
```

```python
df = result.to_dataframe()
print(df.head())
#    spike_index  spike_time_s  spike_index_uncorrected
# 0         8127      0.162540                     8130
# 1        10794      0.215880                    10800
# ...
```

```python
df = result.to_dataframe()
df.to_csv("spike_times.csv", index=False)
```

```python
from spikedetect.io.config import save_params, load_params

# Save to ~/.spikedetect/experiment_001.json
save_params(params, "experiment_001")

# Load back later
params = load_params("experiment_001")
```

```python
from spikedetect.io.mat import save_result

rec.result = result
save_result("trial_with_spikes.mat", rec)
```

```python
from spikedetect.io.native import save_native, load_native

save_native("output.h5", rec)
rec_loaded = load_native("output.h5")
```

To see what the pipeline is doing at each step:
```python
import logging
logging.basicConfig(level=logging.INFO)

# Now detect_spikes will print progress:
# INFO:spikedetect.pipeline.detect:Starting spike detection on 'trial_001' (8.0 s, 50000 Hz)
# INFO:spikedetect.pipeline.detect:Filtering: hp=800 Hz, lp=160 Hz, diff_order=1, polarity=-1
# INFO:spikedetect.pipeline.detect:Found 342 candidate peaks
# INFO:spikedetect.pipeline.detect:Accepted 296 / 342 candidates (distance < 9.6, amplitude > -0.080)
# INFO:spikedetect.pipeline.detect:Detection complete: 296 spikes found in 'trial_001'
```

"ModuleNotFoundError: No module named 'spikedetect'"
Make sure you installed from the spikedetect/ subdirectory (not the repo root):

```bash
cd spikeDetection/spikedetect   # <-- this directory has pyproject.toml
pip install -e .
```

"ModuleNotFoundError: No module named 'h5py'"

You're trying to load a MATLAB v7.3 HDF5 .mat file. Install the I/O extras:

```bash
pip install "h5py>=3.8"   # quote the spec so the shell doesn't treat >= as a redirect
# or install all I/O extras:
pip install -e ".[io]"
```

"ModuleNotFoundError: No module named 'pyabf'"

You're trying to load an ABF file. Install pyabf:

```bash
pip install "pyabf>=2.3"
# or install all I/O extras:
pip install -e ".[io]"
```

"ImportError: pandas is required for to_dataframe()"

Install pandas:

```bash
pip install pandas
```

numba/numpy version warning (_ARRAY_API not found or compiled using NumPy 1.x)

This is harmless -- the pipeline falls back to pure numpy automatically. To silence it:

```bash
# Option 1: Remove numba (simplest, no speed loss for most datasets)
pip uninstall numba

# Option 2: Make versions compatible
pip install "numpy<2"

# Option 3: Upgrade numba to support numpy 2
pip install --upgrade numba
```

GUI windows don't appear (macOS + Anaconda)
If you run the GUI workflow and see terminal output ("Step 1: FilterGUI...") but no window pops up, this is a known issue with Anaconda's Python on macOS. Anaconda's Python is not a macOS "framework build", so the default macosx matplotlib backend cannot display windows properly. The fix is to switch to the Qt5Agg backend:

```bash
# One-time fix: set Qt5Agg as your default matplotlib backend
mkdir -p ~/.config/matplotlib && echo "backend: Qt5Agg" > ~/.config/matplotlib/matplotlibrc
```

If PyQt5 is not installed:

```bash
conda install pyqt
```

Then re-run the GUI workflow -- windows will appear correctly. This is only needed on macOS with Anaconda; Windows and Linux are not affected.
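To confirm which backend matplotlib is actually using (helpful when diagnosing this), you can ask it directly:

```python
import matplotlib

# Reports the active backend, e.g. 'Qt5Agg' (or 'QtAgg') after the fix, 'macosx' before
print(matplotlib.get_backend())
```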
Jupyter GUIs are not interactive / widgets don't respond

Install ipympl and use the widget backend:

```bash
pip install ipympl
```

Then add this at the top of your notebook (before any imports):

```
%matplotlib widget
```

"No spike template provided"
You need to set params.spike_template before calling detect_spikes(). Use the interactive TemplateSelectionGUI or provide a 1-D numpy array from a previous run.
"High-pass cutoff must be below the Nyquist frequency"
Your filter cutoff is too high for your sample rate. The cutoff must be less than sample_rate / 2. Use SpikeDetectionParams.default(fs=your_rate) to get auto-scaled defaults.
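That check is easy to do by hand before running detection; the cutoff values below mirror the defaults in the parameter table earlier in this guide:

```python
fs = 50_000             # your sample rate in Hz
nyquist = fs / 2        # 25000.0 Hz
hp_cutoff, lp_cutoff = 200.0, 800.0
print(hp_cutoff < nyquist and lp_cutoff < nyquist)  # True -> cutoffs are valid
```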
"Expected a Recording object, got ndarray"

You passed a raw numpy array instead of a Recording object. Wrap it:

```python
rec = sd.Recording(name="my_data", voltage=my_array, sample_rate=50000)
```

"Expected SpikeDetectionParams, got dict"

You passed a dict instead of a SpikeDetectionParams object. Create one:

```python
params = sd.SpikeDetectionParams(fs=50000)
# or from a dict:
params = sd.SpikeDetectionParams.from_dict(my_dict)
```

If too few spikes are detected:

- Try lowering `params.peak_threshold` to find more candidates
- Try increasing `params.distance_threshold` to accept more candidates
- Try lowering `params.amplitude_threshold`
- Try `params.polarity = -1` if your spikes go downward
- Use the `FilterGUI` to visually check that filtering reveals spikes

If too many false positives are detected:

- Lower `params.distance_threshold` (stricter template match)
- Raise `params.amplitude_threshold`
- Use `SpotCheckGUI` to manually review and reject bad detections
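Conceptually, the distance and amplitude thresholds act as a joint gate over the candidate scatter shown in ThresholdGUI. A toy numpy sketch with hypothetical values:

```python
import numpy as np

# hypothetical per-candidate features, as plotted by ThresholdGUI
distances = np.array([3.2, 18.5, 7.1, 14.9, 25.0])    # DTW distance to template
amplitudes = np.array([0.5, 0.3, 0.1, 0.4, 0.6])      # spike amplitude

distance_threshold = 15.0    # lowering this makes the template match stricter
amplitude_threshold = 0.2    # raising this rejects small deflections

# a candidate must pass BOTH tests to be accepted as a spike
accepted = (distances < distance_threshold) & (amplitudes > amplitude_threshold)
print(accepted.tolist())  # [True, False, False, True, False]
```

Tightening either threshold can only shrink the accepted set, which is why the two lists above pull the same knobs in opposite directions.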
- Read the User Guide for detailed explanations of every parameter and pipeline stage
- See MIGRATION_GUIDE.md if you're coming from the MATLAB version
- Run `python -m pytest tests/ -v` to verify your installation