This repository contains the code for training and quantizing a compressed MobileNetV1 (MBV1) on the DVSGesture128 dataset.
├──lib
│ ├──models --> contains the float, quantized and inference models
│ ├──quantization_utils --> contains the quantization modules
│ ├──utils --> arguments parsing and other utilities
│ ├──quantize.py --> wrapper for quantization
│ ├──train.py --> wrapper for training
│ └──run.py --> base runner module for training/quantization
├──inference.py --> runs the real integer inference
├──main.py --> script starting point
├──save.py --> saves the quantized weights, biases and scales to CSVs
└──save_model_stat.py --> saves the model architecture details as a CSV
Refer to lib/utils/utils.py for full argument details.
- --data_dir: the root directory where the data is located.
- --proj: the name of the experiment.
- --model: the model to train.
- --train_epochs: training epochs.
- --QAT_epochs: quantization aware training epochs.
- --gpu: index of the GPU to use - {0, 1, 2}.
- Use --train for training and --quantize for quantizing the model.
- brevitas_model_16K: 2-channel model with 5 DS layers.
- brevitas_model_19K: 8-channel model with 5 DS layers.
- brevitas_model_70K: 2-channel model with 7 DS layers.
- brevitas_model_71K: 8-channel model with 7 DS layers.
- brevitas_quant_model: The model for QAT (adapts the chosen float model for quantization-aware training).
- brevitas_inference_model: The model used during real integer inference.
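The QAT model learns a scale factor per tensor so that float weights can be mapped to signed integers. As a rough illustration of the idea (this is a generic sketch of symmetric per-tensor quantization, not code from this repo or from Brevitas; all names are hypothetical):

```python
# Hypothetical sketch of symmetric per-tensor quantization, the general
# scheme behind QAT frameworks such as Brevitas. Not this repo's code.

def quantize_tensor(weights, bit_width=8):
    """Map float weights to signed integers sharing one scale factor."""
    q_max = 2 ** (bit_width - 1) - 1              # e.g. 127 for 8 bits
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / q_max if max_abs else 1.0
    # Round to the nearest integer and clamp into the signed range.
    q = [max(-q_max - 1, min(q_max, round(w / scale))) for w in weights]
    return q, scale

def dequantize_tensor(q, scale):
    """Recover approximate float values from integer codes."""
    return [qi * scale for qi in q]

ints, s = quantize_tensor([0.5, -1.0, 0.25])
print(ints, s)
print(dequantize_tensor(ints, s))
```

During training the forward pass uses the dequantized values, so the network learns weights that survive this rounding.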
conda env create -f environment.yml
Before training:
- Set --data_dir in lib/utils/utils.py or via the command line
- Change file names in main.py
python main.py --train --proj {proj_name} --model {16K, 19K, 70K, 71K} --data_dir {root data directory}
python main.py --quantize --proj {proj_name} --model {16K, 19K, 70K, 71K}
python save.py --proj {proj_name}
python save_model_stat.py --proj {proj_name} --model {16K, 19K, 70K, 71K}
Before inference:
- Set the root data directory (--data_dir)
- Change the input file names in get_dataset() in inference.py
python inference.py --proj {proj_name} --model {16K, 19K, 70K, 71K} --data_dir {root data directory}
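In "real integer inference", each layer accumulates in integer arithmetic and only rescales at the end, folding the input and weight scales into one multiplier. A toy sketch of that pattern (generic illustration, not the repo's inference.py; all names are hypothetical):

```python
# Illustrative integer-only linear layer followed by one float rescale.
# This mirrors the general integer-inference pattern, not this repo's code.

def int_linear(x_q, w_q, x_scale, w_scale):
    """Integer dot product, then dequantize with the combined scale."""
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q))   # pure integer math
    return acc * (x_scale * w_scale)                 # single float rescale

# Float reference: [1.0, 2.0] . [0.5, -0.5] = -0.5
x_q, x_scale = [100, 200], 0.01   # quantized 1.0, 2.0
w_q, w_scale = [50, -50], 0.01    # quantized 0.5, -0.5
print(int_linear(x_q, w_q, x_scale, w_scale))  # approximately -0.5
```

Because the accumulation is exact integer arithmetic, the only approximation error comes from the original quantization of the inputs and weights.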