
Add transformer-based fine-tuning with AMD setup #6

Merged
Rebreda merged 11 commits into main from transformer-fine-tuning on Mar 7, 2026

Conversation


@Rebreda Rebreda commented Mar 7, 2026

With this new process, you can take your listenr-created datasets and train LoRA weights for any Whisper-compatible ASR model.

What's included:

  • listenr-finetune CLI command to train LoRA adapters on top of any Whisper-compatible model
    • Reads HuggingFace datasets produced by listenr-build-dataset --format hf
  • Only saves the lightweight adapter weights — the base model is untouched
  • AMD ROCm support via --bf16 flag
  • Docker setup for reproducible training environments (and less dependency hell)
  • --dry-run mode to validate data and model config before committing to a full run
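The adapter-only saving mentioned above can be sketched as follows. This is a minimal illustration of the LoRA idea with toy shapes and hypothetical names, not listenr's actual implementation: the base weight matrix stays frozen, and only the two small low-rank factors are stored as the adapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight (stands in for one projection matrix in a
# Whisper-style model). Shapes here are tiny for illustration.
d_out, d_in, r = 8, 8, 2           # r is the LoRA rank
W_base = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapter factors. B starts at zero, so the
# adapted model initially behaves exactly like the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16.0                       # LoRA scaling hyperparameter

def adapted_forward(x):
    # Effective weight: W_base + (alpha / r) * B @ A
    return x @ (W_base + (alpha / r) * (B @ A)).T

x = rng.standard_normal(d_in)
# With B == 0 the adapter contributes nothing yet:
assert np.allclose(adapted_forward(x), x @ W_base.T)

# After training, only A and B need to be written to disk — the
# base model is untouched, matching the behavior described above.
adapter = {"lora_A": A, "lora_B": B, "alpha": alpha, "rank": r}
n_adapter = A.size + B.size
print(f"adapter params: {n_adapter}, base params: {W_base.size}")
```

Because the adapter stores `r * (d_in + d_out)` values instead of `d_in * d_out`, the saved file stays small even when the frozen base model is large.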

@Rebreda Rebreda merged commit 6763759 into main Mar 7, 2026
2 checks passed
@Rebreda Rebreda deleted the transformer-fine-tuning branch March 7, 2026 14:10
