Releases · sdatkinson/NeuralAmpModelerCore
Version 0.4.0.rc2
What's Changed
- Adding activation functions and fast LUT implementation by @jfsantos in #177
- Added multichannel PReLU by @jfsantos in #179
- Added gating activation classes by @jfsantos in #180
- Benchmarking report by @sdatkinson in #182
- [BREAKING] Conv1D manages its own ring buffer by @sdatkinson in #181
- [FEATURE] Grouped convolutions by @sdatkinson in #183
- [FEATURE] Grouped convolutions for `Conv1x1`, WaveNet `groups_1x1` hyperparameter by @sdatkinson in #184
- [FEATURE] Bottlenecks in WaveNet layers by @sdatkinson in #185
- [FEATURE] Support multi-input, multi-output models by @sdatkinson in #187
- Head 1x1 convolution by @jfsantos in #189
- [FEATURE] Optionally process WaveNet conditions with another WaveNet by @sdatkinson in #190
- [FEATURE] Integrate gating & blending activations into WaveNet by @sdatkinson in #193
- Configurable activations by @jfsantos in #194
- [FEATURE] FiLMs in `wavenet::Layer` by @sdatkinson in #196
- Fix bugs, add an end-to-end test with a model with all new features by @sdatkinson in #198
- Add documentation by @sdatkinson in #200
- Bump .nam file supported to 0.6.0 by @sdatkinson in #203
- [FEATURE] Softsign activation by @sdatkinson in #205
- [FEATURE] WaveNet: Allow different activations, gating modes, and secondary activations in each layer of a layer array by @sdatkinson in #207
- Refine WaveNet constructors by @sdatkinson in #208
- Add features to `wavenet_a2_max.nam` by @sdatkinson in #209
- [FEATURE] Grouped 1x1 convolutions in FiLM modules by @sdatkinson in #211
- [FEATURE] WaveNet: Make `layer1x1` (formerly `1x1`) optional, rename `.nam` key `"head_1x1"` to `"head1x1"` by @sdatkinson in #214
- [BUGFIX] Fix performance hit for grouped convolutions by @sdatkinson in #216
- [ENHANCEMENT] Optimized depthwise convolutions by @sdatkinson in #217
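Several of the items above add activation machinery, including a fast lookup-table (LUT) implementation in #177. Purely as an illustration of the LUT idea, not the library's actual code, here is a minimal sketch that tabulates tanh on a uniform grid and linearly interpolates at runtime; the class name `LutTanh`, the table size, and the clamp range are all invented for this example.

```cpp
#include <array>
#include <cmath>

// Hypothetical lookup-table tanh: precompute tanh() on a uniform grid over
// [-kRange, kRange] and linearly interpolate between table entries at runtime.
// Inputs outside the range are clamped, where tanh is already ~±1.
class LutTanh
{
public:
  LutTanh()
  {
    for (size_t i = 0; i < kSize; i++)
      mTable[i] = std::tanh(-kRange + (2.0f * kRange) * i / (kSize - 1));
  }

  float operator()(float x) const
  {
    // Map x to a fractional table index.
    const float pos = (x + kRange) * (kSize - 1) / (2.0f * kRange);
    if (pos <= 0.0f)
      return mTable.front();
    if (pos >= kSize - 1)
      return mTable.back();
    const size_t i = static_cast<size_t>(pos);
    const float frac = pos - i;
    // Linear interpolation between neighboring entries.
    return mTable[i] + frac * (mTable[i + 1] - mTable[i]);
  }

private:
  static constexpr size_t kSize = 1024;
  static constexpr float kRange = 6.0f; // tanh(±6) is ≈ ±1 at float precision
  std::array<float, kSize> mTable{};
};
```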
Known issues
- Grouped convolutions are slower than they should be, except in the depthwise case (#215)
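For context on that issue: a grouped 1x1 convolution restricts channel mixing to block-diagonal groups, so its per-sample cost drops from inCh × outCh multiply-adds to inCh × outCh / groups, and the depthwise case (one group per channel) is the cheapest extreme. The naive sketch below only illustrates that structure; the function name, argument layout, and weight ordering are hypothetical and unrelated to the library's implementation.

```cpp
#include <cassert>
#include <vector>

// Illustrative grouped 1x1 convolution (channel mixing only, no time kernel):
// channels are split into `groups` blocks and each block is mixed by its own
// small matrix, so the per-sample cost is inCh*outCh/groups multiply-adds.
// With groups == inCh == outCh this degenerates to a per-channel (depthwise) scale.
std::vector<float> GroupedConv1x1(const std::vector<float>& input,  // size inCh
                                  const std::vector<float>& weight, // size inCh*outCh/groups
                                  int inCh, int outCh, int groups)
{
  assert(inCh % groups == 0 && outCh % groups == 0);
  const int gin = inCh / groups, gout = outCh / groups;
  std::vector<float> output(outCh, 0.0f);
  for (int g = 0; g < groups; g++)
    for (int o = 0; o < gout; o++)
      for (int i = 0; i < gin; i++)
        // Weight block for group g, laid out as [groups][gout][gin].
        output[g * gout + o] += weight[(g * gout + o) * gin + i] * input[g * gin + i];
  return output;
}
```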
New Contributors
- @jfsantos made their first contribution in #177
Full Changelog: v0.3.0...v0.4.0.rc2
Version 0.4.0.rc1
This is a release candidate. It may not be stable yet.
What's Changed
- Adding activation functions and fast LUT implementation by @jfsantos in #177
- Added multichannel PReLU by @jfsantos in #179
- Added gating activation classes by @jfsantos in #180
- Benchmarking report by @sdatkinson in #182
- [BREAKING] Conv1D manages its own ring buffer by @sdatkinson in #181
- [FEATURE] Grouped convolutions by @sdatkinson in #183
- [FEATURE] Grouped convolutions for `Conv1x1`, WaveNet `groups_1x1` hyperparameter by @sdatkinson in #184
- [FEATURE] Bottlenecks in WaveNet layers by @sdatkinson in #185
- [FEATURE] Support multi-input, multi-output models by @sdatkinson in #187
- Head 1x1 convolution by @jfsantos in #189
- [FEATURE] Optionally process WaveNet conditions with another WaveNet by @sdatkinson in #190
- [FEATURE] Integrate gating & blending activations into WaveNet by @sdatkinson in #193
- Configurable activations by @jfsantos in #194
- [FEATURE] FiLMs in `wavenet::Layer` by @sdatkinson in #196
- Fix bugs, add an end-to-end test with a model with all new features by @sdatkinson in #198
New Contributors
- @jfsantos made their first contribution in #177
Full Changelog: v0.3.0...v0.4.0.rc1
Thanks to TONE3000 for supporting the development of this release!
Version 0.3.0
What's Changed
- [BUGFIX] Fix some wrongly-private attributes in WaveNet by @sdatkinson in #139
- [BUGFIX] Eliminate real-time allocations in WaveNet by @sdatkinson in #141
- Update `nlohmann/json` to Version 3.12.0 by @sdatkinson in #152
- Fix the build by @sdatkinson in #153
- Update build to use C++20 by @sdatkinson in #155
- [FEATURE] Ability to register new factories into `get_dsp()` by @sdatkinson in #156
- [FEATURE] Expose some additional attributes by @sdatkinson in #162
- Add build status badge to README by @Khalian in #163
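#156 above adds the ability to register new factories into `get_dsp()`. The library defines its own registration API; the sketch below is only a generic illustration of the underlying pattern (a string-keyed map from architecture name to a creation callback), and `FakeDSP`, `Registry`, `RegisterFactory`, and `CreateDsp` are all invented names.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Stand-in for a DSP base class; purely illustrative.
struct FakeDSP
{
  virtual ~FakeDSP() = default;
};

// Hypothetical registry mapping an architecture name to a factory that
// builds the matching DSP object.
using Factory = std::function<std::unique_ptr<FakeDSP>()>;

std::map<std::string, Factory>& Registry()
{
  static std::map<std::string, Factory> registry;
  return registry;
}

void RegisterFactory(const std::string& architecture, Factory factory)
{
  Registry()[architecture] = std::move(factory);
}

std::unique_ptr<FakeDSP> CreateDsp(const std::string& architecture)
{
  const auto it = Registry().find(architecture);
  return it == Registry().end() ? nullptr : it->second();
}
```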
New Contributors
- @Khalian made their first contribution in #163
Full Changelog: v0.2.0...v0.3.0
Version 0.2.0
What's Changed
- [BUGFIX] Fix gated activation code by @sdatkinson in #102
- Bug fix renaming param in header to implementation by @dhilanpatel26 in #106
- simplify vector load from json by @shaforostoff in #105
- Fix wavenet head check by @mikeoliphant in #108
- Remove `nam::DSP::finalize_()` by @sdatkinson in #110
- Define `nam::DSP::Reset` and `nam::DSP::ResetAndPrewarm` by @sdatkinson in #111
- More efficient pre-warming using multiple-sample buffers by @sdatkinson in #112
- [BREAKING] Remove `config_path` as input to `GetWeights` by @sdatkinson in #119
- CI: Test loading and running models by @sdatkinson in #120
- [FEATURE] Define input and output level calibration functionality for `DSP` by @sdatkinson in #121
- [ENHANCEMENT] `get_dsp`: Set input and output levels while loading models by @sdatkinson in #122
- [Chore] Formatting by @sdatkinson in #128
- [FEATURE] Add support for LeakyReLU activation by @sdatkinson in #127
- [BUGFIX] Handle when calibration fields are present but null-valued by @sdatkinson in #130
- [BUGFIX] Fix gated activations in WaveNet by @sdatkinson in #131
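#121 and #122 above concern input and output level calibration. Calibration levels of this kind are expressed in decibels and must be converted to a linear gain (gain = 10^(dB/20)) before they can be applied to samples. A minimal, library-independent helper with hypothetical names:

```cpp
#include <cmath>

// Convert a level expressed in decibels to a linear amplitude ratio:
// gain = 10^(dB / 20). For example, +6.02 dB ≈ 2.0x and -20 dB = 0.1x.
double DbToLinearGain(double db)
{
  return std::pow(10.0, db / 20.0);
}

// Example: shift a buffer's level by `offsetDb` decibels
// (e.g. to match a model's expected input level).
void ApplyGainDb(double* buffer, int numSamples, double offsetDb)
{
  const double gain = DbToLinearGain(offsetDb);
  for (int i = 0; i < numSamples; i++)
    buffer[i] *= gain;
}
```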
New Contributors
- @dhilanpatel26 made their first contribution in #106
- @shaforostoff made their first contribution in #105
Full Changelog: v0.1.0...v0.2.0
Version 0.1.0
What's Changed
- Pre-warm WaveNet on creation over the size of the receptive field by @mikeoliphant in #71
- [BREAKING] Remove `dsp/` by @sdatkinson in #75
- [BREAKING] Processing interface cleanup by @mikeoliphant in #78
- [BREAKING] Remove `_process_core()` and output normalization by @mikeoliphant in #80
- [BREAKING] Remove `TARGET_DSP_LOUDNESS` by @sdatkinson in #85
- [BREAKING] Remove constructors with loudness by @sdatkinson in #87
- [BUGFIX] Fix LSTM input-output reversal by @sdatkinson in #92
- Move pre-warm to DSP and call it in get_dsp() by @mikeoliphant in #90
- [BREAKING] Add `nam` namespace by @sdatkinson in #93
- [BREAKING] Remove parametric modeling code by @sdatkinson in #95
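Items #71 and #90 above (and #112 in the 0.2.0 notes) deal with pre-warming: pushing at least a receptive field's worth of silent samples through a freshly created model so its internal buffers and state settle before real audio arrives. The following is a conceptual sketch only, written against a hypothetical `Model` interface rather than the library's actual classes.

```cpp
#include <vector>

// Hypothetical model interface for illustration; not the library's API.
struct Model
{
  int ReceptiveField() const { return 8192; } // samples of history the model "sees"
  void Process(const float* in, float* out, int numSamples)
  {
    (void)in; (void)out; (void)numSamples; // real processing would go here
  }
};

// Pre-warm by processing zeros in fixed-size blocks until at least one full
// receptive field of history has flowed through the model. The output is
// discarded; the point is to initialize internal buffers and state.
void Prewarm(Model& model, int blockSize = 512)
{
  std::vector<float> zeros(blockSize, 0.0f), scratch(blockSize);
  for (int warmed = 0; warmed < model.ReceptiveField(); warmed += blockSize)
    model.Process(zeros.data(), scratch.data(), blockSize);
}
```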
Full Changelog: v0.0.1...v0.1.0-rc.1
Version 0.0.1
What's Changed
- Added Hardtanh activation function by @mikeoliphant in #14
- Separate core NAM code from other plugin dsp code by @sdatkinson in #21
- Remove iplug2 dependency by @sdatkinson in #22
- Handle PAD chunk by @sdatkinson in #20
- Output loudness normalization by @sdatkinson in #18
- Removed unused numpy code by @mikeoliphant in #23
- fabs->fabsf in fast tanh by @mikeoliphant in #24
- Formatting by @sdatkinson in #25
- Separate out convnet by @mikeoliphant in #26
- Activation function refactor by @mikeoliphant in #28
- Method to determine whether model has loudness data by @pawelKapl in #33
- Support loading WAV files with "fact" chunk by @sdatkinson in #37
- Activation constructor/destructor cleanup by @mikeoliphant in #32
- Read WAV files via allow-list of chunks by @sdatkinson in #39
- Support all v0.5 models regardless of patch version by @sdatkinson in #41
- Add wav error message string function to dsp::wav by @olilarkin in #42
- Define WaveNet and LSTM destructors by @sdatkinson in #44
- dspStruct pull request by @masqutti in #46
- Add fast tanh and fast sigmoid to LSTM by @mikeoliphant in #43
- Add CMake build tools by @mikeoliphant in #34
- Fix minor incompatibilities with clang/g++ build on Linux by @daleonov in #50
- Fix memory issues by @sdatkinson in #55
- Fix compiler warnings by @falkTX in #53
- Implement expected sample rate for DSP class by @sdatkinson in #60
- Allow NAM_SAMPLE_FLOAT to switch model input to float instead of double by @mikeoliphant in #48
- DSPs prepared for loading persisted models by @pawelKapl in #35
- Fix some WAV impulse files that would sometimes fail to load ... by @fab672000 in #63
- High-pass and low-pass filters by @sdatkinson in #66
- ImpulseResponse::GetSampleRate by @sdatkinson in #70
- Allow defining DSP_SAMPLE_FLOAT to have dsp classes use float I/O by @mikeoliphant in #68
- Virtual destructor to dsp::DSP by @sdatkinson in #72
- Fix some buffers having uninitialized contents after resizing by @daleonov in #51
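#48 and #68 above introduce compile-time switches (`NAM_SAMPLE_FLOAT`, `DSP_SAMPLE_FLOAT`) that change the I/O sample type from double to float. The general pattern is a preprocessor-selected type alias; in the sketch below only the macro name comes from the list, while the alias `Sample` and the `ApplyGain` helper are invented for illustration.

```cpp
// Select the audio sample type at compile time. Building with
// -DNAM_SAMPLE_FLOAT picks single precision; otherwise double is used.
#ifdef NAM_SAMPLE_FLOAT
using Sample = float;
#else
using Sample = double;
#endif

// Any processing code written against the alias works for either build.
void ApplyGain(Sample* buffer, int numSamples, Sample gain)
{
  for (int i = 0; i < numSamples; i++)
    buffer[i] *= gain;
}
```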
New Contributors
- @mikeoliphant made their first contribution in #14
- @pawelKapl made their first contribution in #33
- @olilarkin made their first contribution in #42
- @masqutti made their first contribution in #46
- @daleonov made their first contribution in #50
- @falkTX made their first contribution in #53
- @fab672000 made their first contribution in #63
Full Changelog: v0.0.0...v0.0.1
Version 0.0.0
Core library as of NeuralAmpModelerPlugin v0.7.1
What's Changed
- Support loading IEEE 32-bit WAV files by @sdatkinson in #9
- Reduce gain of IRs by @sdatkinson in #11
Full Changelog: https://github.com/sdatkinson/NeuralAmpModelerCore/commits/v0.0.0