diff --git a/classification/.gitignore b/classification/.gitignore
new file mode 100644
index 0000000..cec0d73
--- /dev/null
+++ b/classification/.gitignore
@@ -0,0 +1,4 @@
+env
+classification/network/__pycache__
+classification/dataset/__pycache__
+model/*
diff --git a/classification/README.md b/classification/README.md
index 132f445..16b5b0d 100644
--- a/classification/README.md
+++ b/classification/README.md
@@ -5,15 +5,38 @@
 The models were trained using the Face2Face face tracker, though the `detect_from_video.py`
 
 Note that we provide the trained models from our paper which have not been fine-tuned for general compressed videos. You can find our used models under [this link](http://kaldir.vc.in.tum.de:/FaceForensics/models/faceforensics++_models.zip).
 
-Setup:
+Setup (requires Python 3):
+- Install `virtualenv`:
+  ```shell
+  pip install virtualenv
+  ```
+- Create a Python virtual environment:
+  ```shell
+  virtualenv env
+  ```
+- Activate the virtual environment:
+  1. Windows:
+     ```shell
+     cd env\Scripts
+     activate
+     cd ..\..
+     ```
+  2. Linux / Mac:
+     ```shell
+     source env/bin/activate
+     ```
 - Install required modules via `requirement.txt` file
+- Download the pre-trained models ([`wget`](http://gnuwin32.sourceforge.net/packages/wget.htm) makes the download easier):
+  ```shell
+  wget -O model/xception-b5690688.pth http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth
+  ```
 - Run detection from a single video file or folder with
-```shell
-python detect_from_video.py --i --m
+ -m
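Taken together, the Linux/Mac setup steps added by this diff amount to the following session. This is a sketch, not a script from the repository: it substitutes Python's built-in `venv` module for the `virtualenv` tool (the resulting `env/` layout is the same), and the model download is left commented out so the sketch runs offline.

```shell
# Sketch of the Linux/Mac setup added by the README diff.
# Assumption: python3's built-in venv module stands in for the virtualenv tool.
# (--without-pip keeps the sketch offline-friendly; drop it to get pip inside the env.)
python3 -m venv --without-pip env
. env/bin/activate                  # equivalent to `source env/bin/activate`
mkdir -p model                      # `wget -O model/...` fails if this directory is missing
# Pre-trained Xception weights, URL taken from the diff above (enable when online):
# wget -O model/xception-b5690688.pth http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth
python -c "import sys; print(sys.prefix)"   # prints a path inside ./env once activated
```

With the environment active, the diff's remaining steps (installing the required modules and running `detect_from_video.py`) proceed as documented.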