Code for the paper 'LMTformer: Facial Depression Recognition with Lightweight Multi-Scale Transformer from Videos'
You can directly execute the `main.py` script with your own dataset.
To proceed:
- Change `load` and `Path` on lines 30 and 31 of `main.py`. `load` is the `csv_load` folder in the root directory; `Path` is the path to your AVEC dataset.
- Change the `device` on line 39 of `main.py` to your own device.
- We also provide the parameters of our trained model in case you only want to test our model. You can use `model.load_state_dict(torch.load('best.pt', map_location='cuda:0'))` to load our parameters for AVEC2013 (see the sketch after this list).
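
For reference, a minimal sketch of what this configuration and an evaluation-only setup might look like. The `LMTformer` class name, constructor, and the paths below are illustrative assumptions based on this README, not verbatim from `main.py`:

```python
import torch
from model import LMTformer  # hypothetical import; use the actual model class in this repo

# Lines 30-31 of main.py: point these at your own files (paths are placeholders).
load = './csv_load'       # the csv_load folder in the repository root
Path = '/data/AVEC2013'   # root directory of your preprocessed AVEC dataset

# Line 39 of main.py: select your own device.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Evaluation only: load our released AVEC2013 parameters instead of training.
model = LMTformer().to(device)  # hypothetical constructor
model.load_state_dict(torch.load('best.pt', map_location='cuda:0'))  # adjust map_location to your device
model.eval()
```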
Before running, ensure the videos are preprocessed to extract the required images.
Kindly note that due to authorization constraints, we are unable to share the AVEC datasets here. You therefore need to extract, crop, align, and preprocess the facial data yourself.
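
As a starting point, here is a minimal sketch of such a preprocessing step using OpenCV's bundled Haar cascade face detector. The paths, frame-sampling rate, and output size are illustrative assumptions, not the exact pipeline used in the paper, and a stronger detector with proper alignment (e.g. MTCNN) is advisable for real use:

```python
import os
import cv2

def extract_faces(video_path, out_dir, every_n_frames=10, size=(224, 224)):
    """Sample frames from a video, detect the largest face, crop and resize it."""
    # Haar cascade shipped with OpenCV; detection only, no alignment.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                # Keep the largest detection, assuming it is the subject's face.
                x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
                crop = cv2.resize(frame[y:y + h, x:x + w], size)
                cv2.imwrite(os.path.join(out_dir, f'{saved:06d}.jpg'), crop)
                saved += 1
        idx += 1
    cap.release()

extract_faces('subject01.mp4', './AVEC2013/subject01')  # illustrative paths
```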
