A biologically plausible vocoder for auditory perception modeling and cochlear implant simulation.
NeuroVoc is a flexible, biologically inspired vocoder that reconstructs audio signals from simulated auditory nerve activity. It is designed to support both normal hearing (NH) and electrical hearing (EH) models, allowing for a seamless comparison of auditory perception under different hearing conditions.
The diagram above illustrates the NeuroVoc processing pipeline:
- Sound — An input waveform (e.g., speech) is passed to an auditory model.
- Hearing Model — This model (e.g., normal hearing or cochlear implant simulation) transforms the sound into a neural representation.
- Neurogram — The output is a time–frequency matrix of spike counts, simulating auditory nerve activity.
- Decoder — The neurogram is then converted back into an acoustic waveform using an inverse short-time Fourier transform (STFT)-based decoder.
This modular flow enables the flexible substitution of different models or model parameters while maintaining a consistent reconstruction backend.
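The pipeline above can be sketched in a few lines of NumPy/SciPy. The functions below are illustrative stand-ins, not NeuroVoc's actual models: encode fakes a hearing model by drawing Poisson spike counts from a magnitude spectrogram, and decode inverts the neurogram with a random-phase inverse STFT. All names here are hypothetical.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)

def encode(sound, fs, n_fft=256):
    """Toy 'hearing model': magnitude spectrogram -> Poisson spike counts."""
    f, _, Z = stft(sound, fs, nperseg=n_fft)
    rate = np.abs(Z) / (np.abs(Z).max() + 1e-12) * 10.0  # mean spikes per frame
    return f, rng.poisson(rate)                          # (freq bins, time frames)

def decode(neurogram, fs, n_fft=256):
    """Toy decoder: treat spike counts as magnitudes, invert with random phase."""
    phase = rng.uniform(-np.pi, np.pi, neurogram.shape)
    _, x = istft(neurogram * np.exp(1j * phase), fs, nperseg=n_fft)
    return x

fs = 16000
sound = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz tone
freqs, ng = encode(sound, fs)   # Sound -> Hearing Model -> Neurogram
recon = decode(ng, fs)          # Neurogram -> Decoder -> waveform
print(ng.shape, recon.shape)
```

The point of the sketch is the interface: any model that maps a waveform to a (frequency × time) spike-count matrix can share the same decoder, which is what makes the backend swappable.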
```
neurovoc/
├── neurovoc/        # Core vocoder framework (Python package)
├── experiments/     # Scripts for generating the figures from the paper
├── data/            # DIN (Digits-in-Noise) test data and data used in the paper
└── tests/           # Unit tests
```
The main package can be found in the neurovoc folder. The experiments folder holds the notebooks that were used to generate the plots in the paper. The online Digits-in-Noise test platform can be found in this repository.
There are two ways to install the package.
If you want to modify or contribute to the codebase:
```
git clone https://github.com/jacobdenobel/neurovoc.git
cd neurovoc
pip install .
```

If you just want to use the package:
```
pip install neurovoc
```

NeuroVoc provides a flexible CLI for simulation and vocoding. Once installed, you can use the neurovoc command. If you want to know more about a command, or see which options are available, add the --help flag. For example:
```
neurovoc generate bruce --help
```

These commands take an audio waveform and convert it into a neurogram (neural spike representation):
```
neurovoc generate bruce input.wav output.pkl
neurovoc generate specres input.wav output.pkl
neurovoc generate ace input.wav output.pkl
```

Each model supports its own optional flags, like --n-fibers-per-bin, --n-mels, or --version for ACE.
Converts a saved neurogram back into an audio waveform using an inverse STFT-based decoder. Use options like --n-hop, --n-fft, or --target-sr to control reconstruction parameters.
```
neurovoc reconstruct output.pkl reconstructed.wav
```

These commands run a full simulation + reconstruction cycle in one go:
```
neurovoc vocode bruce input.wav output.wav
neurovoc vocode specres input.wav output.wav
neurovoc vocode ace input.wav output.wav
```

Add --plot to visualize the original vs. reconstructed signal.
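To build intuition for what an inverse-STFT-based decoder does and how FFT size and hop size interact, here is a minimal SciPy round trip. This is only an illustrative sketch, not NeuroVoc's actual decoder; the variable names n_fft and n_hop merely mirror the CLI flags by analogy.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000               # sample rate, analogous to --target-sr
n_fft, n_hop = 512, 128  # analogous to --n-fft and --n-hop
x = np.sin(2 * np.pi * 300 * np.arange(fs) / fs)  # 1 s test tone

# Forward STFT: a time-frequency matrix, structurally similar to a neurogram
_, _, Z = stft(x, fs, nperseg=n_fft, noverlap=n_fft - n_hop)

# Inverse STFT with matching parameters recovers the waveform
_, x_rec = istft(Z, fs, nperseg=n_fft, noverlap=n_fft - n_hop)

err = np.max(np.abs(x - x_rec[: len(x)]))
print(f"max reconstruction error: {err:.2e}")
```

With a Hann window at 75% overlap the constant-overlap-add condition holds, so the round trip is exact up to floating-point error; a neurogram-based decoder cannot be exact, since spike counts discard phase.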
If you want to apply the NeuroVoc reconstruction logic to a neurogram generated by a method not included in this repository, you can wrap your custom matrix in a Neurogram object:
```python
from neurovoc import Neurogram

neurogram = Neurogram(
    dt=...,           # float: time resolution of the neurogram (in seconds)
    frequencies=...,  # np.array (m, 1): frequency bins corresponding to rows
    data=...,         # np.array (m, t): matrix of normalized spike counts or neural activity
    source=...,       # str: label describing the source/method
)
```

📌 Note: Currently, only mel-scale frequency bins are supported for decoding.
Once constructed, save it to disk with:
```python
neurogram.save("my_custom_ng.pkl")
```

Then reconstruct it using the CLI:
```
neurovoc reconstruct my_custom_ng.pkl reconstructed.wav
```

This makes it easy to plug external auditory models into the NeuroVoc decoding pipeline.
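As a concrete sketch, the snippet below builds the arrays for a synthetic custom neurogram with NumPy. The mel-spaced frequencies use the common HTK formula, which is an assumption on my part: NeuroVoc's internal mel bin layout may differ, so treat this purely as a shape/format illustration.

```python
import numpy as np

def mel_frequencies(n_mels, fmin=50.0, fmax=8000.0):
    """Mel-spaced center frequencies (HTK formula); a common convention,
    not necessarily the exact layout NeuroVoc uses internally."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    return inv(np.linspace(mel(fmin), mel(fmax), n_mels))

n_mels, n_frames, dt = 64, 200, 0.001           # 200 ms at 1 ms resolution
freqs = mel_frequencies(n_mels).reshape(-1, 1)  # shape (m, 1), as documented above
rng = np.random.default_rng(0)
data = rng.poisson(2.0, size=(n_mels, n_frames)).astype(float)
data /= data.max()                              # normalized spike counts in [0, 1]

# Wrap in the Neurogram container and save for the CLI (requires neurovoc):
# from neurovoc import Neurogram
# Neurogram(dt=dt, frequencies=freqs, data=data, source="my-model").save("my_custom_ng.pkl")
print(freqs.shape, data.shape)
```

In a real use case, data would of course come from your own auditory model rather than a Poisson draw.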
If you use NeuroVoc in your work, please cite the following:
```bibtex
@misc{denobel2025spikesspeechneurovoc,
  title={From Spikes to Speech: NeuroVoc -- A Biologically Plausible Vocoder Framework for Auditory Perception and Cochlear Implant Simulation},
  author={Jacob de Nobel and Jeroen J. Briaire and Thomas H. W. Baeck and Anna V. Kononova and Johan H. M. Frijns},
  year={2025},
  eprint={2506.03959},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2506.03959},
}
```

For questions or feedback, contact nobeljpde1@liacs.leidenuniv.nl
Or open an issue in this repository.
This project is licensed under the MIT License. See the LICENSE file for details.
