All environment dependencies are listed in the `requirements.txt` file.
To create a conda environment from this file, run:
```
conda create --name <env_name> --file requirements.txt
```

To activate the environment:
```
conda activate <env_name>
```

To get the learning model:
```
./get-model
```

To run the project:
```
python3 -m presencedetector
```

The presence detector currently runs in one of two modes, selected by the `mode.static` value in `config/application.conf`:
- **static**: Takes an input image from the path defined by the conf value `obj-detector.input-path`; to get started, this is set to an image of a street in London. The inference result is written to the file `output.jpg`, defined by the conf value `obj-detector.output-path.static`.
- **video**: Takes video input from the first available system camera. The inference result, an AVI file, is written to the path defined by `obj-detector.output-path.video`.
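To make the keys above concrete, here is a minimal sketch of what the relevant block of `config/application.conf` might look like, assuming HOCON syntax (suggested by the `.conf` extension). Only the key names come from this README; the values shown, and the assumption that `mode.static` is a boolean toggle, are illustrative guesses:

```hocon
# Hypothetical sketch of config/application.conf -- key names from the
# README, values and types are assumptions for illustration only.
mode {
  static = true                 # assumed: true = static image mode, false = video mode
}

obj-detector {
  input-path = "images/london-street.jpg"   # image used in static mode
  output-path {
    static = "output.jpg"                   # inference result in static mode
    video  = "output.avi"                   # AVI inference result in video mode
  }
}
```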
This code is open source software licensed under the Apache 2.0 License.