This repository was archived by the owner on Aug 2, 2025. It is now read-only.

Releases: codename0og/codename-rvc-fork-3

Codename-RVC-Fork-V3.2.0

01 Jul 23:15


3.2.0
| 24.06.2025 |

Changelog;

v3.2.0 update and new features.

  • Implemented validation mechanism. ( Hold-Out type )

  • Added experimental / unstable features: silence-aware FM loss, uncertainty-weighted loss balancer, and WGAN-GP-style losses for the generator and discriminator.
    ( Only toggleable from within the training script (( check the ' Globals ( tweakable ) ' section if you're interested )) )

  • The way L1 mel loss works is now aligned with how RVC does it.
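The uncertainty-weighted balancer mentioned above presumably follows the usual homoscedastic-uncertainty formulation: each loss term L_i is paired with a learned log-variance s_i and combined as exp(-s_i) * L_i + s_i, so noisy terms get down-weighted while the +s_i term keeps s_i from growing without bound. The fork's actual implementation lives in the training script and works on PyTorch tensors; the function below is only a stdlib sketch of the weighting formula, with all names my own:

```python
import math

def uncertainty_weighted_total(losses, log_vars):
    """Combine loss terms with learned uncertainty weights.

    Each term contributes exp(-s_i) * L_i + s_i: a large learned
    log-variance s_i down-weights a noisy loss, while the +s_i
    regularizer stops s_i from growing without bound.
    """
    assert len(losses) == len(log_vars)
    return sum(math.exp(-s) * l + s for l, s in zip(losses, log_vars))

# With all log-variances at 0 this reduces to a plain sum of the losses.
total = uncertainty_weighted_total([1.0, 2.0], [0.0, 0.0])  # 3.0
```

In a real training loop the log-variances would be trainable parameters updated by the optimizer alongside the model weights.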

Notes:

  • WGAN and WGAN-GP, according to my limited tests, aren't anything great, but you can experiment on your own.

  • The uncertainty loss balancer is hit or miss. It might work for you or it might not; worth testing if you want ~

  • The silence-aware FM loss is something I barely tested, and when I did, the mask was set to 0.01. It's now 0.05, which might be somewhat better. You can test it if you want.

  • I'm dropping any form of support for Linux and macOS users. I don't feel like building a Linux / macOS wheel for PESQ.
    However, if someone's willing to do it ( it has to work on py 3.10.X and with numpy 1.26.4 ), feel free to PR on:
    https://github.com/codename0og/codename-essentials
    PESQ sourced from:
    https://github.com/schmiph2/pysepm

  • Fork's version 3.2.0 requires a full re-install. ( Yeet the env folder, apply release's patch, use the run-install.bat )

Codename-RVC-Fork-V3.1.7-rev1

26 Jun 15:23


3.1.7-rev1
| 24.06.2025 |

Changelog;

This is a revision of bfloat16;

  • BrainFloat16 amp training is enabled by default. It is forced off only on unsupported hardware ( refer to the changelog from the v3.1.7 release )

  • Reworked training loop

  • Dropped usage of gradscaler ( most likely unnecessary for bf16 )

  • F0 and feature extraction are done in FP32 regardless of training mode ( FP32, TF32, BF16 amp )

  • Inference and the during-training preview are done in FP32 regardless of training mode ( FP32, TF32, BF16 amp )
    ( This one is specifically crucial. After tests I concluded it wasn't ideal at all to infer in bf16 ~ mirroring fp16's usage was a wrong move. )

  • Got rid of some unnecessary downcasting

Codename-RVC-Fork-V3.1.7

24 Jun 00:31


3.1.7
| 24.06.2025 |

Changelog;

Added the ability to train in brainfloat16 precision ( bf16 ).
You can change it in the settings tab ( make sure to use the restart button after the change. )

bf16 models are suggested to be inferenced in fp32:
After you train a bf16 model ( in bf16 mode ) simply switch applio to fp32.

brainfloat16 has worse precision than fp32 but is more stable ( has a bigger range ) than fp16 ( which is not used anymore. )
However, it decreases VRAM consumption and grants roughly a 50% speedup.
Is it worth it? Not sure. That's up to you to decide.

If precision / accuracy is your thing, then simply use TF32.
Sure, you don't get VRAM benefits like you would with bf16 ( simply put, it's the same as fp32 ),
but your speed is still roughly doubled.
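The range-vs-precision tradeoff is easy to see by hand: bfloat16 keeps fp32's 8-bit exponent but stores only 7 mantissa bits, so truncating the low 16 bits of a float32 bit pattern yields a bf16-truncated value. A small stdlib Python sketch ( my own illustration, not fork code ):

```python
import struct

def to_bf16(x: float) -> float:
    """Truncate a float to bfloat16 by zeroing the low 16 mantissa
    bits of its float32 representation (round-toward-zero)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

# bf16 keeps fp32's exponent range, so huge values survive
# (1e38 would overflow fp16, whose max is about 65504)...
big = to_bf16(1e38)

# ...but only about 2-3 decimal digits of precision remain.
coarse = to_bf16(1.2345678)  # 1.234375
```

This is why bf16 rarely needs a GradScaler: overflow to infinity, the problem the scaler solves for fp16, is essentially off the table with an fp32-sized exponent.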

KEY NOTES:

  • bf16 handling for both training and inference differs from applio: it mimics fp16 behavior ( at least as of now. )

  • bf16 is only supported on RTX 30xx series / ampere arch and higher.

  • bf16 is not supported on CPUs. If you train on CPU, fp32 is gonna be enforced.

  • tf32 is only supported on RTX 30xx series / ampere arch and higher.
    ( TensorFloat32 has been available here for a while, but I still wanted to make it clear. )

  • fp32 is still the default precision.

Codename-RVC-Fork-V3.1.6-rev2

17 May 05:54
fd2f52f


3.1.6-rev2 update
| 17.05.2025 |

Changelog;

Big logging upgrade + minor changes

  • I decided to ditch the " avg loss over 5 epochs " logging in favor of " avg_50 " ( averaged over 50 steps, well known from applio already ), adapted to my liking.

  • avg-epoch loss logging stays the way it is, except now it's automatically disabled if you train from scratch / without pretrains.
    ( In that situation, avg_50 is gonna fully take over so, don't worry. )

  • minor changes, such as: changing the default saving frequency from 10 to 1 + boosting its max limit to 5k ( dev purposes )

  • Already fixed but worth mentioning: mrf-hifigan is now functioning properly. ( It still needs the right pretrains! )
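The avg_50 idea is just a running mean over the last 50 training steps. A minimal sketch ( the class name and structure are mine, not the fork's actual logger ):

```python
from collections import deque

class RunningAvg:
    """Running average over the last `window` loss values,
    in the spirit of the avg_50 logging described above."""

    def __init__(self, window: int = 50):
        # deque with maxlen automatically evicts the oldest value
        self.buf = deque(maxlen=window)

    def update(self, value: float) -> float:
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)

avg50 = RunningAvg(50)
for step in range(100):
    smoothed = avg50.update(float(step))
# After 100 steps the window holds steps 50..99, so the mean is 74.5.
```

Unlike an epoch-level average, this keeps producing meaningful numbers from the very first steps, which is why it can fully take over when pretrains are absent.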

Codename-RVC-Fork-V3.1.6-rev1

16 May 05:47


3.1.6-rev1 update
| 16.05.2025 |

Changelog;

Crucial update that's gonna affect the training.

  • Added a double-update strategy option ( applies to the Discriminator ).
    This should make the discriminator actually useful and improve the adversarial aspect of the training by a ton.
    Enabled by default. ( You can disable it if you want. In the UI, it's called " Double-update strategy for Discriminator ". )
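The double-update strategy boils down to stepping the discriminator twice for every generator step. A skeleton of such a loop ( counters only, all names mine; the fork's real loop does the forward / backward passes in PyTorch ):

```python
def train_epoch(num_batches: int, d_steps_per_g: int = 2):
    """Count optimizer updates under a double-update discriminator
    strategy: the discriminator steps `d_steps_per_g` times on each
    batch before the single generator step."""
    d_updates = g_updates = 0
    for _ in range(num_batches):
        for _ in range(d_steps_per_g):
            # discriminator forward / backward / optimizer.step() here
            d_updates += 1
        # generator forward / backward / optimizer.step() here
        g_updates += 1
    return d_updates, g_updates

d, g = train_epoch(10)  # 20 discriminator updates, 10 generator updates
```

Giving the discriminator extra steps per batch is a common GAN trick to keep it from lagging behind the generator, which is what makes the adversarial signal useful.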

Codename-RVC-Fork-V3.1.6

15 May 03:25


3.1.6 update
| 15.05.2025 |

Changelog;

QoL update and a few changes

  • A few new switches integrated in the UI ( cuDNN benchmark, cuDNN deterministic, TF32 precision )
  • Changed the limit of warmup epochs from 100 to 1000.
  • Some formatting / cleanup changes in the training script.

Codename-RVC-Fork-V3.1.5-rev1

13 May 00:17


3.1.5-rev1 update
| 13.05.2025 |

Changelog;

  • Clipping values are back at '999999' ( meaning clipping is off. Feel free to experiment. )

  • Reverted lr decay gamma to the stock / default one for applio and rvc ( needs more testing on each optimizer. WIP. )

  • Added support for custom reference samples for evaluation on the fly.
    ( I'll probably enhance it in some time, making it more user-friendly. For now, one has to prepare the reference files as you normally would during preprocessing / F0-Feature extraction and put them in 'logs/reference/'. (( Max 10, maybe 20 secs in length + use the naming as seen in the example ones in the reference folder. )) )

  • Experimental losses removed. ( They need longer evaluation. )

  • Ranger25 is being replaced with the original Ranger21. ( Need more tests on 25... something fishy might be going on, hm. )

  • Slowly getting rid of the dependency on i18n ( English will be the only supported language ) + fixing some weird translations.
    ( WIP )

===========
Collective Micro-patches note:

  • Removal of tm loss remnants
  • Dependency fix ( torch_optimizer )
  • Fixed a few indentation and formatting issues in the mrf-hifigan generator
  • Fixed the " fcpe " / " FCPE " naming inconsistency causing fcpe to not work properly. ( Sorry! )

Codename-RVC-Fork-V3.1.5

11 May 00:49


3.1.5 update
| 11.05.2025 |
Changelog;

  • Removed dropout ( not effective )
  • Now you can choose in the UI which mel loss function to use ( Multi-scale or L1 ). (( thx to noobies for simplifying the process. ye, I borrowed it. ))
  • Same as above but for optimizers ( Ranger25, RAdam and AdamW ).
  • Added " sync-and-rollback.ps1 " for quickly syncing / updating the fork;
    ( It backs up and then synchronizes your local fork with the repository + in case of issues, you can use it for rolling back. )
  • A few new tools in extras

edit: Corrected the index error issue.

Codename-RVC-Fork-V3.1.4

07 May 05:58


  • Removal of applio's fcpe and fumiama rvc's fcpe ( " fumi_fcpe " ).
  • TorchFCPE becomes the main and only fcpe.
  • Filter radius now has an actual use. Check the note.
  • Added support for Dropout1d in nsf hifigan generator

NOTE:

  • TorchFCPE's current version is 0.0.4; however, since the last release, some changes / updates were made ( on the official repository ).
    Those changes are present here, in this fork's fcpe.
  • Filter Radius now controls the radius / 'threshold' for FCPE.
    More info on that in the description you can read when you open the inference tab in the fork. ( ps. it's only for fcpe. )
  • nsf hifigan now uses Dropout1d, whose rate you can ( for now ) control through the 'dropout_value' global within the training script.

I did some prior testing and stuff works, but as always, in case of any issues, lemme know asap. Cheers.

Codename-RVC-Fork-V3.1.3-rev1

05 May 19:06


rev. 1 of v3.1.3:

  • fixed f0 alignment in fcpe and fumi_fcpe
  • added a shape verifier, found in extras