Implementation of Glow in PyTorch. Based on the paper:

> [Glow: Generative Flow with Invertible 1x1 Convolutions](https://arxiv.org/abs/1807.03039)
> Diederik P. Kingma, Prafulla Dhariwal
> arXiv:1807.03039
The training script and hyperparameters are designed to match the CIFAR-10 experiments described in Table 4 of the paper.
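For orientation, here is a minimal sketch of the paper's core building block, the invertible 1x1 convolution. The class below is illustrative only (the name `InvConv1x1` and its signature are not this repo's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvConv1x1(nn.Module):
    """Sketch of an invertible 1x1 convolution (Kingma & Dhariwal, 2018).

    The weight is a learned C x C matrix applied across channels; its
    log-determinant (times H*W) is this layer's contribution to the
    change-of-variables objective. Not this repo's implementation.
    """
    def __init__(self, num_channels):
        super().__init__()
        # Initialize with a random rotation so the weight starts invertible.
        w = torch.linalg.qr(torch.randn(num_channels, num_channels))[0]
        self.weight = nn.Parameter(w)

    def forward(self, x, reverse=False):
        _, c, h, w = x.shape
        # slogdet returns (sign, log|det|); only the log-magnitude is needed.
        logdet = h * w * torch.slogdet(self.weight)[1]
        if reverse:
            weight = torch.inverse(self.weight)
            logdet = -logdet
        else:
            weight = self.weight
        z = F.conv2d(x, weight.view(c, c, 1, 1))
        return z, logdet
```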
- Make sure you have Anaconda or Miniconda installed.
- Clone the repo: `git clone https://github.com/chrischute/glow.git glow`
- Go into the cloned repo: `cd glow`
- Create the environment: `conda env create -f environment.yml`
- Activate the environment: `source activate glow`
- Make sure you've created and activated the conda environment as described above.
- Run `python train.py -h` to see options.
- Run `python train.py [FLAGS]` to train. E.g., run `python train.py` for the default configuration, or run `python train.py --gpu_ids=0,1` to run on 2 GPUs instead of the default of 1 GPU. This will also double the batch size.
- At the end of each epoch, samples from the model will be saved to `samples/epoch_N.png`, where `N` is the epoch number (see the sketch after this list).
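The actual sampling code lives in `train.py`; below is a hedged sketch of how such a grid might be generated and written out with `torchvision`. The helper name `save_samples` and the `net(z, reverse=True)` call signature are illustrative assumptions, not this repo's API:

```python
import torch
import torchvision

@torch.no_grad()
def save_samples(net, epoch, num_samples=64, device='cuda'):
    """Draw z ~ N(0, I), invert the flow, and save an image grid.

    Assumes the model maps latents back to image space via
    `net(z, reverse=True)`; this signature is an assumption.
    """
    z = torch.randn(num_samples, 3, 32, 32, device=device)
    x, _ = net(z, reverse=True)
    images = torch.sigmoid(x)  # map back to [0, 1] if the model works in logit space
    grid = torchvision.utils.make_grid(images, nrow=8)
    torchvision.utils.save_image(grid, f'samples/epoch_{epoch}.png')
```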
A single epoch takes about 30 minutes with the default hyperparameters (K=32 flow steps per level, L=3 levels, C=512 hidden channels in the coupling networks) on two 1080 Ti's.
More samples can be found in the `samples/` folder.
| Epoch | Train (bits/dim) | Valid (bits/dim) |
|---|---|---|
| 10 | 3.64 | 3.63 |
| 20 | 3.51 | 3.56 |
| 30 | 3.46 | 3.53 |
| 40 | 3.43 | 3.51 |
| 50 | 3.42 | 3.50 |
| 60 | 3.40 | 3.51 |
| 70 | 3.39 | 3.49 |
| 80 | 3.38 | 3.49 |
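The numbers above are bits per dimension, the standard metric for flow models on CIFAR-10. Given a per-example negative log-likelihood in nats, the conversion is a simple rescaling; the helper below is a sketch, not code from this repo:

```python
import math

def bits_per_dim(nll, image_shape=(3, 32, 32)):
    """Convert a per-example NLL in nats to bits per dimension.

    For CIFAR-10, dim = 3 * 32 * 32 = 3072, so a validation score of
    3.49 bits/dim corresponds to roughly 3.49 * 3072 * ln(2) ~= 7431
    nats per image.
    """
    dim = math.prod(image_shape)
    return nll / (dim * math.log(2))
```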
As pointed out by AlexanderMath, you can use gradient checkpointing to reduce memory consumption in the coupling layers. If interested, see the related issue in this repo's issue tracker.
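A hedged sketch of what that looks like with `torch.utils.checkpoint` follows; the `AffineCoupling` class and the way `self.net` predicts scale and shift are illustrative assumptions, not this repo's exact layer:

```python
import torch
from torch.utils.checkpoint import checkpoint

class AffineCoupling(torch.nn.Module):
    """Sketch of an affine coupling layer with optional gradient checkpointing.

    `net` is the (memory-heavy) conv network that predicts scale and shift
    from one half of the channels; checkpointing recomputes its activations
    during the backward pass instead of storing them, trading compute for memory.
    """
    def __init__(self, net, use_checkpoint=True):
        super().__init__()
        self.net = net
        self.use_checkpoint = use_checkpoint

    def forward(self, x):
        x_a, x_b = x.chunk(2, dim=1)
        if self.use_checkpoint and self.training:
            # Recompute self.net(x_a) during backward instead of caching it.
            st = checkpoint(self.net, x_a, use_reentrant=False)
        else:
            st = self.net(x_a)
        s, t = st.chunk(2, dim=1)
        x_b = (x_b + t) * torch.exp(s)
        logdet = s.flatten(1).sum(-1)
        return torch.cat([x_a, x_b], dim=1), logdet
```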