The quantization code is never called, since it is commented out. Binary convolution and activation are already implemented. Do I need to uncomment these calls (a few of them are shown below)? During the forward pass I already quantize the weights and activations, so why do I need "quantization.py"?
# if args.quantize:
#     global bin_op
#     bin_op = quantization.Binarize(model)
...
# if args.quantize:
#     bin_op.restore()
#     bin_op.updateBinaryGradWeight()
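For context, "quantizing in the forward pass" means something like the following minimal sketch (my own illustration of a sign-binarized conv layer, not the repo's exact `BinConv2d`):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinConv2dSketch(nn.Module):
    """Illustrative binary conv: weights and activations are
    binarized with sign() inside forward(), so no external
    quantization step is needed for inference."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        bin_x = torch.sign(x)            # binarize activations to {-1, 0, +1}
        bin_w = torch.sign(self.weight)  # binarize weights on the fly
        return F.conv2d(bin_x, bin_w, stride=self.stride,
                        padding=self.padding)

x = torch.randn(1, 3, 8, 8)
layer = BinConv2dSketch(3, 16, kernel_size=3, padding=1)
out = layer(x)
print(tuple(out.shape))
```

Note that in-forward binarization like this covers inference, whereas `Binarize`-style helpers typically also save/restore full-precision weights and scale gradients during training.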
If I uncomment them, I get the following error:
  File "main2.py", line 88, in main
    bin_op = quantization.Binarize(model)
  File "/media/mmrg/inci/files/quantization.py", line 33, in __init__
    param_grp_1.append(m.conv.weight)
  File "/home/mmrg/anaconda3/envs/pytorch_mmdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'BinConv2d' object has no attribute 'conv'
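As far as I can tell, the traceback means `quantization.Binarize` expects each binary layer to hold its conv under the attribute name `conv` (so it can collect `m.conv.weight`), while the `BinConv2d` in my model apparently stores its parameters under a different name. A minimal reproduction of the mismatch (both class layouts here are hypothetical, not the repo's actual code):

```python
import torch.nn as nn

class BinConv2dWrapped(nn.Module):
    """Layout Binarize seems to assume: conv stored at self.conv."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)

class BinConv2dFlat(nn.Module):
    """Layout that triggers the error: weight registered directly."""
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(nn.Conv2d(3, 16, 3).weight.detach().clone())

wrapped, flat = BinConv2dWrapped(), BinConv2dFlat()
print(hasattr(wrapped, "conv"))  # wrapped.conv.weight works
print(hasattr(flat, "conv"))     # flat.conv.weight raises the ModuleAttributeError
```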