Real-Time AI Detection

A lightweight, real-time vehicle detection pipeline. It captures streams from RTSP sources or webcams, runs object detection via YOLOv8, and serves a live feed with traffic metrics through a WebSocket API.

Why We Built This

Mainly to explore real-time inference constraints and multithreaded Python applications. We wanted to understand:

  • How to handle RTSP streams without blocking frame processing
  • How to manage producer-consumer patterns in a latency-sensitive context
  • How to wire up a FastAPI backend to serve annotated video streams
  • How to degrade gracefully when the model falls behind the camera feed

Architecture

The system is split into three decoupled components:

  1. Camera Module: Dedicated threads for reading frames. It handles reconnection logic independently to ensure the stream never dies, even if the feed drops temporarily.
  2. Inference Engine: A separate worker that pulls the latest frame from a circular buffer. If the model (YOLOv8) is slower than the camera, we drop frames intelligently to stay "real-time" rather than building up lag.
  3. API & Dashboard: A FastAPI server that exposes a WebSocket endpoint for the stream and REST endpoints for system health.
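The frame-dropping behavior in step 2 can be sketched as a single-slot buffer (a minimal illustration of the pattern; the actual class and method names in this repo may differ):

```python
import threading
from collections import deque

class LatestFrameBuffer:
    """Single-slot buffer: the camera thread overwrites stale frames,
    so the inference worker always pulls the newest one (hypothetical sketch)."""

    def __init__(self):
        self._slot = deque(maxlen=1)  # maxlen=1 silently drops older frames
        self._cond = threading.Condition()

    def put(self, frame):
        with self._cond:
            self._slot.append(frame)  # overwrite whatever is still waiting
            self._cond.notify()

    def get(self, timeout=None):
        with self._cond:
            if not self._slot:
                self._cond.wait(timeout)
            return self._slot.popleft() if self._slot else None
```

If the camera produces frames faster than the model consumes them, `get()` returns only the most recent frame and the backlog never grows, which is what keeps the feed "real-time" instead of increasingly delayed.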

Limitations

  • CPU Bound: We currently run on CPU for portability. While YOLOv8n is fast, you may see lower FPS depending on your hardware; the pipeline will use CUDA when a GPU is available.
  • Python Threads: We use Python threading, which is fine for I/O (camera reading), but the inference is still subject to the GIL. We mitigate this by keeping the inference step fast and decoupled.
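The CPU/CUDA fallback described above can be handled with a small helper like the following (a sketch; `pick_device` is a hypothetical name, and the repo's actual selection logic may differ):

```python
def pick_device() -> str:
    """Prefer CUDA when PyTorch sees a GPU; otherwise stay on CPU."""
    try:
        import torch  # ultralytics pulls in torch as a dependency
        if torch.cuda.is_available():
            return "cuda:0"
    except ImportError:
        pass
    return "cpu"

# The resulting string can then be passed to the model at inference time,
# e.g. model.predict(frame, device=pick_device())
```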

Development Note

The frontend dashboard (static/) was "vibe coded" to quickly visualize the data. It's a simple HTML/JS page that connects to the WebSocket. It's not a production React app, but it gets the job done for the demo.

Running the Demo

  1. Install dependencies (conda environment recommended):

     pip install -r requirements.txt

  2. Configure your camera in config.yaml. By default, it looks for a local webcam (index 0).

  3. Start the server:

     python main.py

  4. Open the dashboard at http://localhost:8000.
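A minimal config.yaml for step 2 might look like this (the exact keys are illustrative — check the file shipped in the repo):

```yaml
camera:
  source: 0            # webcam index, or an RTSP URL such as rtsp://host/stream
model:
  weights: yolov8n.pt  # lightweight YOLOv8 nano weights
server:
  host: 0.0.0.0
  port: 8000
```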

Demo

Demo Video
