
Text2Speech project

Report

The introduction, model architecture, implementation details, experiments, and results are presented in the wandb report.

Installation guide

To get started, install the requirements

pip install -r ./requirements.txt

Then download the training data

sudo apt install axel
bash loader.sh

Model training

This project implements the FastSpeech2 architecture for the Text2Speech task.

To train the model from scratch, run

python3 train.py -c tts/configs/train.json

To fine-tune a pretrained model from a checkpoint, pass the --resume parameter. For example, continuing training with the train.json config looks as follows

python3 train.py -c tts/configs/train.json -r saved/models/1_initial/<run_id>/model_best.pth

Inference stage

Before running inference, download the pretrained checkpoint with the following Python code

import gdown
gdown.download("https://drive.google.com/uc?id=1NvQ-TpAdKKEsNIdkdwWHxITzAfEXdMG6", "default_test_model/checkpoint.pth")

Model evaluation is executed with the command

python3 test.py \
   -i default_test_model/text.txt \
   -r default_test_model/checkpoint.pth \
   -w waveglow/pretrained_model/waveglow_256channels.pt \
   -o output \
   -l False
  • -i (--input-text) — path to the input .txt file with texts. The file is read line by line.
  • -r (--resume) — path to the model checkpoint. Note that the config file is expected to be in the same directory under the name config.json.
  • -w (--waveglow-path) — path to the pretrained WaveGlow model.
  • -o (--output) — output directory where the .wav files will be saved.
  • -l (--log-wandb) — whether to log results to the wandb project. If True, command-line authorization is required. The project name can be changed in the config file.
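As noted above, the input .txt file is read row by row, so each line becomes one utterance. A minimal sketch of that reading step, assuming a plain-text file (the function name and the skipping of blank lines are illustrative, not taken from the project's code):

```python
from pathlib import Path


def read_input_texts(path: str) -> list[str]:
    """Return one utterance per non-empty line of the input .txt file."""
    # Strip surrounding whitespace and drop blank lines (illustrative choice)
    return [
        line.strip()
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]
```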

Running with default parameters

python3 test.py

The model supports applying different coefficients for audio length, pitch, and energy. At the inference stage each coefficient takes values from {0.8, 1.0, 1.2}, and the results are written as follows

f"{row_number_in_txt_file_starting_from_1}-{length_level}-{pitch_level}-{energy_level}.wav"
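With three values per coefficient, each input row therefore yields 3 × 3 × 3 = 27 output files. A minimal sketch of the naming scheme (the function and variable names are illustrative):

```python
from itertools import product

# Candidate coefficients for length, pitch, and energy (from the README)
LEVELS = [0.8, 1.0, 1.2]


def output_names(row_number: int) -> list[str]:
    """One .wav filename per (length, pitch, energy) combination."""
    return [
        f"{row_number}-{length}-{pitch}-{energy}.wav"
        for length, pitch, energy in product(LEVELS, LEVELS, LEVELS)
    ]
```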

Examples of model evaluation are also presented in the report.

Note: the WaveGlow model from glow.py supports only GPU by default. To run the code on CPU, remove all cuda tensor calls from this file with the command

sed -i 's/torch.cuda/torch/g' glow.py

To restore the CUDA version, copy back the original file

cp FastSpeech/glow.py .

Credits

The model code is based on a notebook.
