Self-Attention and Self-Cooperation YOLOX (SASC-YOLOX) is an improved network based on YOLOX; its strategies are simple but deliver significant performance gains. It aims to extract precise features and is applied to smoke detection. This repo is a PyTorch implementation of SASC-YOLOX.
In addition, we provide a smoke dataset composed of real smoke images that we annotated manually, termed the annotated real smoke of Xi'an Jiaotong University (XJTU-RS). The real images come from two benchmark datasets: CVPR and USTC. Our XJTU-RS dataset is available, and the code is CIFR.
| Model | size | AP | AP50 | AP75 | APS | APM | APL | AR | Params (M) | weights |
|---|---|---|---|---|---|---|---|---|---|---|
| YOLOX | 640 | 0.683 | 0.953 | 0.766 | 0.388 | 0.602 | 0.716 | 0.678 | 8.94 | down |
| SASC-YOLOX | 640 | 0.726 | 0.964 | 0.817 | 0.535 | 0.647 | 0.753 | 0.714 | 8.94 | down |
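AP50 and AP75 in the table above are COCO-style average precision at IoU thresholds of 0.50 and 0.75, and APS/APM/APL split results by object area. As a minimal illustration of the IoU these thresholds are applied to (illustrative sketch only, not code from this repo):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts toward AP75 only if its IoU with a ground-truth box is >= 0.75.
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # → 0.333
```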
Installation
Step1. Install SASC-YOLOX.
```shell
git clone git@github.com:jingjing-maker/SASC-YOLOX.git
cd SASC-YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .
```
Step2. Install pycocotools.
```shell
pip3 install cython
pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```

<details>
<summary>Demo</summary>
Step1. Download a pretrained model from the benchmark table.
Step2. Use either -n or -f to specify your detector's config. For example:
```shell
python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth --path ./test_img/002941.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
or
```shell
python tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth --path ./test_img/002941.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
For example:
```shell
python tools/demo.py image -f exps/default/yolox_s.py -c weights/yolox_s.pth --path ./test_img/002941.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
```
Demo for video:
```shell
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
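The --conf and --nms flags above set the score threshold and the NMS IoU threshold. A minimal pure-Python sketch of that post-processing logic (illustrative only; the repo's actual implementation lives in its own utilities):

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def postprocess(dets, conf_thr=0.25, nms_thr=0.45):
    """dets: list of (x1, y1, x2, y2, score). Drop low scores, then greedy NMS."""
    dets = sorted((d for d in dets if d[4] >= conf_thr), key=lambda d: -d[4])
    kept = []
    for d in dets:
        # Keep a box only if it does not overlap a kept box above the NMS threshold.
        if all(iou(d[:4], k[:4]) < nms_thr for k in kept):
            kept.append(d)
    return kept

dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (20, 20, 30, 30, 0.7), (0, 0, 5, 5, 0.1)]
print(postprocess(dets))  # → [(0, 0, 10, 10, 0.9), (20, 20, 30, 30, 0.7)]
```

The duplicate at (1, 1, 11, 11) is suppressed by NMS, and the 0.1-score box falls below --conf.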
</details>
<details>
<summary>Reproduce our results on COCO</summary>
Step1. Prepare COCO dataset
```shell
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
```

Step2. Reproduce our results on COCO by specifying -n:
```shell
python tools/train.py -n yolox-s -d 1 -b 64 --fp16 -o [--cache]  # -d 8
                         yolox-m
                         yolox-l
                         yolox-x
```
- -d: number of gpu devices
- -b: total batch size, the recommended number for -b is num-gpu * 8
- --fp16: mixed precision training
- --cache: cache images into RAM to accelerate training; this requires a large amount of system RAM.
When using -f, the above commands are equivalent to:
```shell
python tools/train.py -f exps/default/yolox_s.py -d 8 -b 64 --fp16 -o [--cache]
                         exps/default/yolox_m.py
                         exps/default/yolox_l.py
                         exps/default/yolox_x.py
```

Multi-Machine Training
We also support multi-node training. Just add the following args:
- --num_machines: total number of training nodes
- --machine_rank: the rank of each node
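For orientation, in launchers of this kind each process's global rank is conventionally derived from the machine rank and the local GPU index. The sketch below shows that convention; it is not SASC-YOLOX's actual launcher code:

```python
def global_rank(machine_rank, local_rank, gpus_per_machine):
    """Conventional mapping: machine 0 hosts ranks 0..G-1, machine 1 hosts G..2G-1, ..."""
    return machine_rank * gpus_per_machine + local_rank

# 2 machines x 8 GPUs -> world size 16; GPU 3 on machine 1 is global rank 11.
print(global_rank(1, 3, 8))  # → 11
```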
Suppose you want to train on 2 machines, where the master machine's IP is 123.123.123.123 and it uses TCP port 12312.
On the master machine, run
```shell
python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 0
```
On the second machine, run
```shell
python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 1
```

Evaluation
We support batch testing for fast evaluation:
```shell
python tools/eval.py -n yolox-s -c yolox_s.pth -b 64 -d 1 --conf 0.001 [--fp16] [--fuse]
                        yolox-m
                        yolox-l
                        yolox-x
```
- --fuse: fuse conv and bn
- -d: number of GPUs used for evaluation. Default: all available GPUs are used.
- -b: total batch size across all GPUs
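--fuse folds each BatchNorm layer into the preceding convolution for faster inference. The underlying algebra, sketched here for a single scalar channel (illustrative only, not the repo's implementation):

```python
import math

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * ((w*x + b) - mean) / sqrt(var + eps) + beta
    into a single affine map y = w_f * x + b_f."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

w, b = 2.0, 0.5                          # conv weight and bias
gamma, beta, mean, var = 1.5, -0.2, 0.3, 4.0  # BN parameters and running stats
w_f, b_f = fuse_conv_bn(w, b, gamma, beta, mean, var)

x = 3.0
unfused = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
fused = w_f * x + b_f
print(abs(unfused - fused) < 1e-9)  # the two paths agree
```

The same fold applies channel-wise to real conv weights, which is why fusing changes speed but not results.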
To reproduce the speed test, we use the following command:
```shell
python tools/eval.py -n yolox-s -c yolox_s.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse
                        yolox-m
                        yolox-l
                        yolox-x
```

</details>

Deployment

- MegEngine in C++ and Python
- ONNX export and an ONNXRuntime
- TensorRT in C++ and Python
- ncnn in C++ and Java
- OpenVINO in C++ and Python
- The ncnn android app with video support: FeiGeChuanShu
- SASC-YOLOX with Tengine support: BUG1989
- SASC-YOLOX + ROS2 Foxy: Ar-Ray
- SASC-YOLOX Deploy DeepStream: nanmi
- SASC-YOLOX ONNXRuntime C++ Demo: DefTruth
- Converting darknet or yolov5 datasets to COCO format for SASC-YOLOX: Daniel
If you use SASC-YOLOX in your research, please cite our work by using the following BibTeX entry:
(To be continued.)
