
Training Improvements, LR Scheduler, GPU Fixes, and PyPI Package #93

Open

JinruG wants to merge 8 commits into Ghadjeres:master from JinruG:master

Conversation


JinruG commented Mar 23, 2026

Training Improvements

  • cuda() called before train(): The model was previously moved to the GPU only after training finished, so the entire training loop ran on the CPU.
  • Best-val-loss checkpointing: Each voice model now saves only the checkpoint that achieved the lowest validation loss, instead of always overwriting it with the final epoch's weights.
  • ReduceLROnPlateau scheduler: The learning rate is automatically reduced when the validation loss plateaus, controlled by the new --lr_patience and --lr_factor options.
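The three fixes above can be sketched in one training loop. This is an illustrative sketch, not the PR's actual code: the function name `train_voice_model`, the `ckpt_path` argument, and the MSE criterion are assumptions; only `--lr_patience` / `--lr_factor` come from the PR description.

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import ReduceLROnPlateau


def train_voice_model(model, train_loader, val_loader, *,
                      epochs=5, lr=1e-3, lr_patience=3, lr_factor=0.5,
                      ckpt_path="best_voice.pt"):
    # Fix 1: move the model to the GPU *before* the training loop, not after.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    criterion = nn.MSELoss()  # assumed loss for illustration
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # Fix 3: reduce LR when validation loss stops improving.
    scheduler = ReduceLROnPlateau(optimizer, mode="min",
                                  factor=lr_factor, patience=lr_patience)

    best_val = float("inf")
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)

        scheduler.step(val_loss)
        # Fix 2: write the checkpoint only when validation loss improves.
        if val_loss < best_val:
            best_val = val_loss
            torch.save(model.state_dict(), ckpt_path)
    return best_val
```

Passing the validation loss to `scheduler.step(...)` is what lets `ReduceLROnPlateau` track the plateau; a scheduler stepped per batch or without a metric would not behave as described above.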

Dependency Compatibility

  • Replaced bare torch.load(...) calls with an explicit weights_only=True / weights_only=False keyword argument to silence the FutureWarning that newer PyTorch releases emit (the default switches to weights_only=True in PyTorch 2.6).
  • standalone_mode is now set to False in the __main__ block so that Click returns the command's result value rather than calling sys.exit(0) on success; this is required for embedding the entry point in notebooks or test harnesses without terminating the process.
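Both changes can be sketched minimally. Only `--lr_patience` and `standalone_mode=False` come from the PR; the command name, file path, and tiny model here are illustrative assumptions.

```python
import os
import tempfile

import click
import torch
from torch import nn

# Explicit weights_only when loading a checkpoint (available since
# PyTorch 1.13): True suffices for plain state dicts; False is only for
# trusted files that pickle arbitrary Python objects.
path = os.path.join(tempfile.gettempdir(), "deepbach_demo_state.pt")
torch.save(nn.Linear(2, 2).state_dict(), path)
state = torch.load(path, map_location="cpu", weights_only=True)


@click.command()
@click.option("--lr_patience", default=3,
              help="Epochs to wait before reducing the learning rate.")
def main(lr_patience):
    return lr_patience


# standalone_mode=False makes Click return the callback's value instead of
# calling sys.exit(0) on success, so the entry point can be invoked from a
# notebook or test harness without killing the process.
result = main([], standalone_mode=False)
```

With the default `standalone_mode=True`, the `main([])` call above would terminate the interpreter via `sys.exit(0)` even on success.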

deepbach_pytorch PyPI Package

https://pypi.org/manage/project/deepbach-pytorch/release/0.3.6/
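The release should be installable from PyPI under the name shown in the URL above; the version pin below matches the linked release:

```shell
# Install the published release from PyPI.
pip install deepbach-pytorch==0.3.6
```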

A test script (test.py) is included that exercises the public API entry point:

test.py

