YOLO inference takes too long, causing video stuttering and lag #2
- test video(people 30FPS): https://drive.google.com/file/d/1c90Ip1jVieWfEQjmw67LsWxTn7Xd1sYi/view?usp=sharing
- test video(cars 60FPS): https://drive.google.com/file/d/1VxqCZFpaa13BBLIqKZG7_ehj4wcp-k-Z/view?usp=sharing
inference time: 51 ~ 499 ms (cars) / 64 ~ 510 ms (people)
deepsort time: 1.1 ~ 10.6 ms (cars) / 1.1 ~ 2.0 ms (people)
~7% frame loss at 60 FPS (cars.mp4)
Video playback is inconsistent, with visible stuttering and jitter.
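The per-stage timings above can be reproduced by wrapping each pipeline stage in a timer. The sketch below is hypothetical (the dummy `run_inference`/`run_tracker` functions stand in for the real YOLO and DeepSORT calls in jetson-fpv, which are not shown here); only the timing wrapper itself is the point:

```python
import time

def run_inference(frame):
    # Placeholder for the YOLO detector call; the real pipeline
    # runs a TensorRT-accelerated model here.
    time.sleep(0.005)
    return []

def run_tracker(frame, detections):
    # Placeholder for the DeepSORT update step.
    time.sleep(0.001)
    return detections

def timed_step(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

frame = object()  # stand-in for a decoded video frame
dets, inf_s = timed_step(run_inference, frame)
tracks, trk_s = timed_step(run_tracker, frame, dets)
print(f"inference {inf_s * 1000:.1f} ms, deepsort {trk_s * 1000:.1f} ms")
```

Logging these two numbers per frame is what produces the `inf_min`/`inf_max` and `track_min`/`track_max` statistics in the analysis output below.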
- cars.mp4 // 60 FPS 1280x720
- log: cars.log
daniel@daniel-nvidia:~/Work/jetson-fpv$ ./scripts/tools/analysis_raw_log.sh cars.log
dos2unix: converting file cars.log to Unix format...
### Video log reading ...
Processing: [====================] 100%
### Video frame consistency check ...
Processing: [====================] 100%
Video(totl): 1347 frames
Video(lost): 0 frames
Video: frames are continuous
### Inference consistency check ...
Processing: [====================] 100%
Inference(totl): 1330 frames
Inference(eval): 344 frames - 25%
Inference(skip): 893 frames - 67%
Inference(lost): 94 frames - 7%
Inference(inf_min): 0.05106854438781738 second
Inference(inf_max): 0.4998948574066162 second
Inference(track_min): 0.001172780990600586 second
Inference(track_max): 0.01059722900390625 second
Inference: frames experience loss
Inference: specific missing frame are 21 22 23 24 25 26 27 28 29 30 31 301 302 333 352 389 410 411 419 469 486 502 547 548 559 564 573 578 579 580 581 582 600 601 615 653 658 659 683 684 695 710 715 725 730 736 751 752 781 790 795 804 805 806 822 857 858 887 888 889 910 911 961 976 977 995 1019 1034 1050 1073 1099 1110 1115 1116 1137 1146 1151 1156 1157 1169 1190 1191 1197 1198 1215 1222 1245 1269 1278 1279 1301 1306 1307 1308
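The consistency check reported above amounts to finding gaps in the sequence of frame indices that actually reached the inference stage. A minimal sketch of that gap detection (not the actual `analysis_raw_log.sh` implementation, which parses the raw log text):

```python
def missing_frames(seen, total):
    """Return sorted frame indices in 1..total absent from `seen`."""
    return sorted(set(range(1, total + 1)) - set(seen))

# Tiny example: 10 frames, indices 4 and 7 never reached inference.
seen = [1, 2, 3, 5, 6, 8, 9, 10]
lost = missing_frames(seen, 10)
loss_pct = 100 * len(lost) / 10
print(lost, f"{loss_pct:.0f}% lost")  # prints: [4, 7] 20% lost
```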
- people.mp4 // 25 FPS 1280x720
- log: people.log
daniel@daniel-nvidia:~/Work/jetson-fpv$ ./scripts/tools/analysis_raw_log.sh people.log
dos2unix: converting file people.log to Unix format...
### Video log reading ...
Processing: [====================] 100%
### Video frame consistency check ...
Processing: [====================] 100%
Video(totl): 200 frames
Video(lost): 0 frames
Video: frames are continuous
### Inference consistency check ...
Processing: [====================] 100%
Inference(totl): 188 frames
Inference(eval): 165 frames - 87%
Inference(skip): 24 frames - 12%
Inference(lost): 0 frames - 0%
Inference(inf_min): 0.06393694877624512 second
Inference(inf_max): 0.5100603103637695 second
Inference(track_min): 0.0011720657348632812 second
Inference(track_max): 0.002038717269897461 second
Inference: frames are continuous
daniel@daniel-nvidia:~/Work/jetson-fpv$
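The skip percentages in both logs follow directly from the frame budget: at 60 FPS each frame must complete in ~16.7 ms, while even the best-case inference takes ~51 ms, so a serial pipeline has no choice but to skip most frames. A quick arithmetic check (assuming inference runs serially, one evaluation at a time):

```python
def frame_budget_ms(fps):
    """Time available per frame at a given frame rate."""
    return 1000.0 / fps

def min_skip_ratio(inference_ms, fps):
    """Fraction of frames that must be skipped if each evaluated
    frame ties up the pipeline for `inference_ms`."""
    frames_per_inference = inference_ms / frame_budget_ms(fps)
    return max(0.0, 1.0 - 1.0 / frames_per_inference)

# cars.mp4: 60 FPS, best-case 51 ms inference
print(f"budget {frame_budget_ms(60):.1f} ms")   # budget 16.7 ms
print(f"skip >= {min_skip_ratio(51, 60):.0%}")  # skip >= 67%
```

The predicted minimum skip rate of ~67% matches the observed `Inference(skip): 893 frames - 67%` for cars.mp4, which suggests the stutter is a direct consequence of inference latency rather than a separate pipeline bug.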
- Test Env
Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson Orin Nano Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 22.04 Jammy Jellyfish
- Release: 5.15.148-tegra
jtop:
- Version: 4.3.1
- Service: Active
Libraries:
- CUDA: 12.6.68
- cuDNN: 9.3.0.75
- TensorRT: 10.3.0.30
- VPI: 3.2.4
- OpenCV: 4.11.0 - with CUDA: YES
--------------------------------
NVIDIA SDK:
DeepStream C/C++ SDK version: 7.1
jetson-inference version: c038530 (dirty)
jetson-utils version: 6d5471c
--------------------------------
Python Environment:
Python 3.10.12
GStreamer: YES (1.20.3)
NVIDIA CUDA: YES (ver 12.6, CUFFT CUBLAS FAST_MATH)
OpenCV version: 4.11.0 CUDA True
YOLO version: 8.3.89
PYCUDA version: 2024.1.2
Torch version: 2.5.0a0+872d972e41.nv24.08
Torchvision version: 0.20.0a0+afc54f7
DeepStream SDK version: 1.2.0
onnxruntime version: 1.21.0
onnxruntime-gpu version: 1.20.0
--------------------------------
FPV Environment:
jetson-fpv version: d2a0558 dirty
WFB-ng version: 25.2.25.64703
MSPOSD version: c28d645 20250313_214314