The introduction, model architecture, implementation details, experiments, and results are presented in the wandb report.
To get started, install the requirements:

```bash
pip install -r ./requirements.txt
```

Then download the train data:
```bash
sudo apt install axel
bash loader.sh
```

This project implements the FastSpeech2 architecture for the Text2Speech task.
To train the model from scratch, run

```bash
python3 train.py -c tts/configs/train.json
```

For fine-tuning a pretrained model from a checkpoint, the `--resume` parameter is applied.
For example, continuing training of a model with the train.json config looks as follows:
```bash
python3 train.py -c tts/configs/train.json -r saved/models/1_initial/<run_id>/model_best.pth
```
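Since the run id in the checkpoint path changes between runs, a small stdlib-only sketch like the one below can print the resume command for the newest saved checkpoint. This helper is not part of the repository; only the directory layout is taken from the command above, the selection logic is an assumption.

```python
from pathlib import Path

# find the newest model_best.pth under saved/models/1_initial/ (layout from the command above)
checkpoints = sorted(
    Path("saved/models/1_initial").glob("*/model_best.pth"),
    key=lambda p: p.stat().st_mtime,
)
if checkpoints:
    print(f"python3 train.py -c tts/configs/train.json -r {checkpoints[-1]}")
else:
    print("no saved checkpoints found")
```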
Before applying the model, the pretrained checkpoint is downloaded with the following Python code:

```python
import gdown

# download the pretrained checkpoint from Google Drive into default_test_model/
gdown.download(
    "https://drive.google.com/uc?id=1NvQ-TpAdKKEsNIdkdwWHxITzAfEXdMG6",
    "default_test_model/checkpoint.pth",
)
```

Model evaluation is executed with the command
```bash
python3 test.py \
    -i default_test_model/text.txt \
    -r default_test_model/checkpoint.pth \
    -w waveglow/pretrained_model/waveglow_256channels.pt \
    -o output \
    -l False
```

- `-i` (`--input-text`) provides the path to the input `.txt` file with texts. The file is read row by row (see the sketch after this list).
- `-r` (`--resume`) provides the path to the model checkpoint. Note that the config file is expected to be in the same directory under the name `config.json`.
- `-w` (`--waveglow-path`) provides the path to the pretrained WaveGlow model.
- `-o` (`--output`) specifies the output directory path, where the `.wav` files will be saved.
- `-l` (`--log-wandb`) determines whether to log results to the wandb project or not. If `True`, authorization in the command line is needed. The name of the project can be changed in the config file.
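For clarity, here is a minimal sketch of how these inputs are consumed, using only the standard library. The layout (config.json next to the checkpoint, one utterance per non-empty row) follows the description above; everything else about the actual test.py internals is an assumption.

```python
import json
from pathlib import Path

checkpoint_path = Path("default_test_model/checkpoint.pth")
# the config is expected to sit next to the checkpoint under the name config.json
config = json.loads((checkpoint_path.parent / "config.json").read_text())

# the input file is read row by row; each non-empty row is one utterance to synthesize
texts = [
    line.strip()
    for line in Path("default_test_model/text.txt").read_text().splitlines()
    if line.strip()
]

for row_number, text in enumerate(texts, start=1):
    print(row_number, text)  # the real script synthesizes audio for each row here
```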
Running with default parameters:

```bash
python3 test.py
```

The model supports applying different coefficients of audio length, pitch, and energy.
For the inference stage, they are taken from {0.8, 1.0, 1.2}; the results are written as follows:

```
f"{row_number_in_txt_file_starting_from_1}-{length_level}-{pitch_level}-{energy_level}.wav"
```

Examples of model evaluation are also presented in the report.
Note: the WaveGlow model from glow.py supports only GPU by default. To run the code on CPU, remove all CUDA tensors from this file with the command

```bash
sed -i 's/torch.cuda/torch/g' glow.py
```
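For illustration, the substitution rewrites CUDA tensor constructors into their CPU counterparts. The line below is a hypothetical example of the kind of allocation affected, not an exact excerpt from glow.py.

```python
import torch

batch, channels, frames = 1, 8, 100
# before the substitution (GPU-only): audio = torch.cuda.FloatTensor(batch, channels, frames).normal_()
# after the substitution the same tensor is allocated on the CPU:
audio = torch.FloatTensor(batch, channels, frames).normal_()
```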
To go back to the CUDA version:

```bash
cp FastSpeech/glow.py .
```

The code of the model is based on a notebook.