BlurScene is a deep learning model designed to anonymize faces and license plates in traffic-related images and video data.
```shell
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

✅ Requires Python ≥ 3.10 (tested with 3.10.12).
All inference settings can be found in `config/inference.yaml`.

You must provide:

- ✅ Path to the model weights (`.pt`)
- ✅ Path to the model configuration (`.yaml`, Hydra config)

📁 It's recommended to store both in a `weights/` folder (which is whitelisted in `.dockerignore`).
- `mirror_image`: horizontal mirroring
- `enlarged_regions_n`: crops the image into n × n subregions
- `pre_merge_score_threshold`, `post_merge_score_threshold`: filter weak predictions
- `merging_method`:
  - `wbf`: Weighted Box Fusion
  - `nms`: Non-Maximum Suppression
  - `nmm`: Non-Maximum Merging
- `merge_iou_threshold`: IoU threshold for merging
- `area_method`: how box area is computed:
  - `int`: `(x1 - x0 + 1) * (y1 - y0 + 1)`
  - `float`: `(x1 - x0) * (y1 - y0)`
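To illustrate how `merge_iou_threshold` and `area_method` interact, here is a minimal, self-contained sketch of IoU-based non-maximum suppression (the `nms` merging method). The function names are illustrative and not taken from the BlurScene codebase:

```python
def box_area(box, method="float"):
    """Box area under the two conventions selectable via area_method."""
    x0, y0, x1, y1 = box
    if method == "int":  # inclusive pixel-index convention
        return (x1 - x0 + 1) * (y1 - y0 + 1)
    return (x1 - x0) * (y1 - y0)  # continuous-coordinate convention

def iou(a, b, method="float"):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    if ix1 <= ix0 or iy1 <= iy0:
        return 0.0
    inter = box_area((ix0, iy0, ix1, iy1), method)
    return inter / (box_area(a, method) + box_area(b, method) - inter)

def nms(boxes, scores, iou_threshold=0.5, method="float"):
    """Keep the highest-scoring box, drop boxes overlapping it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j], method) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```

For example, two heavily overlapping detections of the same plate collapse to the higher-scoring one, while a distant detection survives. WBF and NMM differ in that they merge box coordinates rather than discarding boxes outright.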
You can also configure logging level and format via the YAML.
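Putting these options together, an `inference.yaml` might look like the sketch below. The key names follow the options documented above, but the nesting, the logging block layout, and the example values are assumptions, not the shipped file:

```yaml
# Hypothetical layout — key names from the docs, structure assumed
device: cuda                     # fixed at Docker build time (see below)
mirror_image: true               # horizontal-mirroring augmentation
enlarged_regions_n: 2            # crop into 2 × 2 subregions
pre_merge_score_threshold: 0.3
post_merge_score_threshold: 0.5
merging_method: wbf              # wbf | nms | nmm
merge_iou_threshold: 0.5
area_method: float               # int | float
logging:
  level: INFO
  format: "%(asctime)s %(levelname)s %(message)s"
```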
You can download the pretrained model weights and matching configuration from the following link:
📁 Place the downloaded files inside the weights/ directory.
⚠️ Make sure the `.pt` and `.yaml` files match exactly — otherwise inference may fail.
You can test inference directly via:
```shell
python inference.py path/to/image.jpg
```

This loads the model, compiles it if necessary, and outputs detections for the image.
Start a local Flask server using:
```shell
python run_server.py
```

Or use Gunicorn (recommended):

```shell
gunicorn --config=config/gunicorn.py run_server:app
```

🔧 Gunicorn reads environment variables:

- `WORKERS`
- `WORKER_TIMEOUT`
- `PORT`
- `LOG_LEVEL`

Default address is http://0.0.0.0:5000.
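For quick testing from Python, the request can also be built with the standard library. The helper below is illustrative and not part of the repository; it assumes the server is listening on the default address and mirrors the curl call shown next:

```python
import urllib.request

def build_anonymize_request(image_bytes, host="http://localhost:5000"):
    """POST raw JPEG bytes to the /anonymize endpoint."""
    return urllib.request.Request(
        url=f"{host}/anonymize",
        data=image_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )

# Sending the request (requires a running server):
# with open("test.jpg", "rb") as f:
#     req = build_anonymize_request(f.read())
# with urllib.request.urlopen(req) as resp, open("returned_image.jpg", "wb") as out:
#     out.write(resp.read())
```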
```shell
curl -H "Content-Type: image/jpeg" --data-binary @test.jpg http://localhost:5000/anonymize --output returned_image.jpg
```

```shell
docker build . -t blurscene
```

Note: Settings in `inference.yaml` (e.g. `device`) are fixed at build time.

```shell
docker run -d --gpus all -p 5000:5000 --name blurscene_container blurscene
```

🔍 Monitor logs:

```shell
docker logs -f blurscene_container
```

Wait for model compilation to complete before sending requests.
@misc{vosshans2024aeifdatacollectiondataset,
author = {Marcel Vosshans and Alexander Baumann and Matthias Drueppel and Omar Ait-Aider and Ralf Woerner and Youcef Mezouar and Thao Dang and Markus Enzweiler},
title = {The AEIF Data Collection: A Dataset for Infrastructure-Supported Perception Research with Focus on Public Transportation},
url = {https://arxiv.org/abs/2407.08261},
year = {2024},
}
This project is released under the MIT License.