Modular, platform-agnostic detection system for edge devices. Supports Raspberry Pi, NVIDIA Jetson, and desktop development.
Part of the NexaMesh counter-UAS platform.
- Platform Agnostic: Works on Raspberry Pi, Jetson, desktop, or any Linux system
- Modular Architecture: Swap cameras, inference engines, and trackers at runtime
- Multiple Inference Backends: TFLite, ONNX, Coral Edge TPU
- Object Tracking: Centroid and Kalman filter trackers
- Web Streaming: MJPEG stream with status dashboard
- Targeting System: Distance estimation and engagement control
- Auto-Configuration: Automatic hardware detection and optimization
- Raspberry Pi 4 or 5 (recommended) or Pi 3B+
- Raspberry Pi OS (64-bit recommended) or Ubuntu 22.04+
- Python 3.9+ (usually pre-installed)
- Camera: Pi Camera v2/v3 or USB webcam
- Optional: Coral USB Accelerator for 10x speedup
This quickstart assumes Raspberry Pi 4 with Pi Camera v2.1. For other platforms or configurations, see Advanced Setup.
```bash
git clone https://github.com/JustAGhosT/NexaMesh.git
cd NexaMesh/apps/detector
python3 -m venv venv && source venv/bin/activate
pip install -e ".[pi]"
```
```bash
# Enable camera (if not already)
sudo raspi-config   # Interface Options > Camera > Enable
```

Export a quantized TFLite model:

```bash
pip install ultralytics tensorflow "flatbuffers==24.3.25"
python -c "from ultralytics import YOLO; \
  YOLO('yolov5n.pt').export(format='tflite', imgsz=320, int8=True)"
mv yolov5nu_saved_model/yolov5nu_int8.tflite models/yolov5n_int8.tflite
```

Run the detector:

```bash
python src/main.py --config configs/quickstart.yaml
```

That's it. You should see a live video feed with detections.
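Optionally, sanity-check the exported model before pointing the detector at it. A minimal sketch, assuming `tflite-runtime` is installed:

```python
# Quick sanity check of the exported TFLite model.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="models/yolov5n_int8.tflite")
interpreter.allocate_tensors()

# For a 320 px int8 export, the input is typically [1, 320, 320, 3].
print("input shape:", interpreter.get_input_details()[0]["shape"])
```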
Next steps:
- Advanced Setup - Other platforms, cameras, Coral TPU
- Configuration Reference - All available settings
- Train Custom Model - Better accuracy for drone detection
Pi Camera not detected:

```bash
# Check if Pi Camera is detected
rpicam-hello --list-cameras

# If not detected, enable it
sudo raspi-config   # Interface Options > Camera > Enable
sudo reboot
```

USB camera detected instead of Pi Camera:
The hardware auto-detection found a USB camera. Either:

- Use it: change `camera_type: usb` in config, or
- Fix Pi Camera detection (see above)

Override camera from CLI:

```bash
python src/main.py --config configs/quickstart.yaml --camera usb
```

Pi Camera capture fails (picamera2 not found):
```bash
sudo apt install -y python3-picamera2
```

Missing aiohttp (streaming):

```bash
pip install aiohttp
```

NumPy version conflicts:

tflite-runtime does not support NumPy 2.x. If you see errors like
`module compiled using NumPy 1.x cannot run in NumPy 2.x`:

```bash
pip install "numpy<2"
```

Note: TFLite is transitioning to LiteRT, which supports NumPy 2.
```
pi-drone-detector/
├── src/
│   ├── interfaces.py          # Abstract base classes (Protocols)
│   ├── frame_sources.py       # Camera implementations (Pi, USB, Video, Mock)
│   ├── inference_engines.py   # Inference backends (TFLite, ONNX, Coral)
│   ├── trackers.py            # Object trackers (Centroid, Kalman)
│   ├── renderers.py           # Frame visualization
│   ├── streaming.py           # MJPEG web streaming server
│   ├── alert_handlers.py      # Alert handlers (Console, Webhook, File)
│   ├── targeting.py           # Distance estimation & fire net control
│   ├── hardware.py            # Hardware auto-detection
│   ├── factory.py             # Component wiring and pipeline creation
│   ├── main.py                # CLI entry point
│   ├── config/
│   │   ├── settings.py        # Pydantic configuration models
│   │   └── constants.py       # Immutable configuration values
│   └── utils/
│       ├── geometry.py        # IoU, NMS, bbox utilities
│       └── logging_config.py  # Structured logging
├── tests/
│   ├── conftest.py            # Shared pytest fixtures
│   └── unit/                  # Unit tests
├── azure-ml/                  # Azure ML training infrastructure
│   ├── train.py               # Training script
│   ├── job.yaml               # Azure ML job definition
│   └── setup-azure.sh         # One-click Azure setup
├── docs/                      # Documentation
├── models/                    # Trained models (.tflite, .onnx)
├── pyproject.toml             # Project configuration & dependencies
├── requirements.txt           # Core runtime dependencies
├── requirements-dev.txt       # Development dependencies
└── README.md
```
The system uses a modular, interface-based architecture that allows swapping components:
```
┌─────────────────────────────────────────────────────────────┐
│                      DetectionPipeline                       │
├─────────────────────────────────────────────────────────────┤
│  ┌────────────┐    ┌────────────────┐    ┌──────────────┐   │
│  │FrameSource │───>│InferenceEngine │───>│ObjectTracker │   │
│  └────────────┘    └────────────────┘    └──────────────┘   │
│        ▲                   │                     │          │
│        │                   ▼                     ▼          │
│  ┌────────────┐    ┌────────────────┐    ┌──────────────┐   │
│  │ Hardware   │    │  DroneScorer   │    │ AlertHandler │   │
│  │ Detection  │    └────────────────┘    └──────────────┘   │
│  └────────────┘            │                                │
│        │                   ▼                                │
│        │           ┌────────────────┐    ┌──────────────┐   │
│        └──────────>│    Renderer    │───>│  Streaming   │   │
│                    └────────────────┘    │    Server    │   │
│                                          └──────────────┘   │
└─────────────────────────────────────────────────────────────┘
```
| Component | Implementations |
|---|---|
| FrameSource | PiCameraSource, USBCameraSource, VideoFileSource, MockFrameSource |
| InferenceEngine | TFLiteEngine, ONNXEngine, CoralEngine, MockInferenceEngine |
| ObjectTracker | NoOpTracker, CentroidTracker, KalmanTracker |
| AlertHandler | ConsoleAlertHandler, WebhookAlertHandler, FileAlertHandler |
| FrameRenderer | OpenCVRenderer, HeadlessRenderer, StreamingRenderer |
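The swap-at-runtime design rests on structural typing: the pipeline depends only on abstract interfaces, and the factory wires in concrete implementations. A minimal sketch of the pattern (illustrative names, not the actual `interfaces.py` signatures):

```python
# Illustrative sketch of the interface-based pattern; the project's real
# Protocols live in src/interfaces.py and may use different names/signatures.
from typing import Optional, Protocol

import numpy as np


class FrameSource(Protocol):
    def read(self) -> Optional[np.ndarray]:
        """Return the next frame, or None when the source is exhausted."""


class InferenceEngine(Protocol):
    def infer(self, frame: np.ndarray) -> list:
        """Return a list of detections for one frame."""


def run(source: FrameSource, engine: InferenceEngine) -> None:
    # Because the loop depends only on the Protocols, any camera or
    # inference backend can be swapped in at runtime.
    while (frame := source.read()) is not None:
        detections = engine.infer(frame)
        print(f"{len(detections)} detections")
```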
Settings can be configured via:
- Environment Variables (highest priority) - Quick setup, good for deployment
- YAML Configuration File - Recommended for production, comprehensive settings
- Programmatic defaults - Built-in defaults
Note: Command-line arguments only support a subset of options (model, camera, tracker, etc.). Advanced features like streaming must be configured via environment variables or config files.
Create a config.yaml file to customize all settings:
```yaml
camera_type: auto       # auto, picamera, usb, video, mock
engine_type: auto       # auto, tflite, onnx, coral, mock
tracker_type: centroid  # none, centroid, kalman

capture:
  width: 640
  height: 480
  fps: 30

inference:
  model_path: "models/drone-detector_int8.tflite"
  confidence_threshold: 0.5
  nms_threshold: 0.45
  num_threads: 4

targeting:
  max_targeting_distance_m: 100.0
  fire_net_enabled: false
  fire_net_min_confidence: 0.85
  fire_net_min_track_frames: 10
  fire_net_arm_required: true

streaming:
  enabled: false
  port: 8080
  quality: 80
  max_fps: 15

display:
  headless: false
  show_fps: true          # Show FPS counter on video feed
  show_drone_score: true
```

All settings can be overridden via environment variables. Use the nested delimiter `__` for nested settings:
```bash
# Capture settings
export CAPTURE__WIDTH=1280
export CAPTURE__HEIGHT=720
export CAPTURE__FPS=30

# Inference settings
export INFERENCE__CONFIDENCE_THRESHOLD=0.6
export INFERENCE__NMS_THRESHOLD=0.45

# Streaming settings (most commonly used)
export STREAM_ENABLED=true
export STREAM_PORT=8080
export STREAM_QUALITY=80
export STREAM_MAX_FPS=15

# Alert settings
export ALERT__WEBHOOK_URL=https://api.example.com/detections
export ALERT__SAVE_DETECTIONS_PATH=detections.json
```
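With streaming enabled (see `STREAM_*` above), the dashboard is reachable at `http://<pi-address>:8080`. To consume the MJPEG feed from another machine, a sketch; the `/stream` path here is an assumption, so substitute whatever URL the dashboard exposes:

```python
# View the detector's MJPEG stream remotely with OpenCV.
# NOTE: the "/stream" path is an assumption; use the URL your dashboard shows.
import cv2

cap = cv2.VideoCapture("http://raspberrypi.local:8080/stream")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("drone-detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```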
Using Configuration Files:

Generate and use a configuration file:

```bash
# Generate default config file
python src/main.py --generate-config config.yaml

# Edit config.yaml as needed, then run
python src/main.py --config config.yaml

# CLI arguments override config file values
python src/main.py --config config.yaml --confidence 0.7 --headless

# Override camera type from config file
python src/main.py --config config.yaml --camera usb
```

Programmatic Loading (for custom scripts):
```python
from src.config.settings import Settings

# Load from YAML file
settings = Settings.from_yaml("config.yaml")

# Or load from environment variables
settings = Settings()

# Access settings
if settings.streaming.enabled:
    print(f"Streaming on port {settings.streaming.port}")
```

Running with mock components:

```bash
# Mock everything (no camera, no model needed)
python src/main.py --mock

# Real camera with mock inference (test camera setup)
python src/main.py --camera usb --engine mock
```

For all camera/inference options, see Advanced Setup.
Use the automated CI/CD pipeline for training:
- Go to Actions tab → ML Training Pipeline → Run workflow
- Select parameters (model, epochs, dataset)
- Download trained model from artifacts when complete
```bash
# 1. Set up infrastructure with Terraform
cd infra/terraform/ml-training
terraform init
terraform apply -var-file="environments/dev.tfvars"

# 2. Download training data
cd apps/detector
python scripts/download_public_datasets.py --output ./data --all

# Or with Roboflow API key for more datasets:
ROBOFLOW_API_KEY=xxx \
  python scripts/download_public_datasets.py --output ./data --roboflow

# 3. Upload dataset to Azure ML
RG=$(cd ../../../infra/terraform/ml-training && terraform output -raw resource_group_name)
WS=$(cd ../../../infra/terraform/ml-training && terraform output -raw ml_workspace_name)
az ml data create --name drone-dataset --path ./data/combined \
  --type uri_folder --resource-group $RG --workspace-name $WS

# 4. Submit training job
cd azure-ml
az ml job create --file job.yaml --resource-group $RG --workspace-name $WS

# 5. Monitor training
az ml job stream --name <JOB_NAME> --resource-group $RG --workspace-name $WS

# 6. Download and validate model
az ml job download --name <JOB_NAME> --output-name model --download-path ./outputs
python ../scripts/validate_model.py --model-dir ./outputs/model/exports --benchmark

# 7. Deploy to Pi
scp outputs/model/exports/drone-detector_int8.tflite pi@raspberrypi:~/models/
```

| Parameter | Default | Description |
|---|---|---|
| `model` | `yolov8n.pt` | Base model (yolov8n/s/m) |
| `epochs` | `100` | Training epochs |
| `imgsz` | `320` | Image size |
| `batch` | `16` | Batch size |
| Configuration | GPU | Time | Cost |
|---|---|---|---|
| MVP (10 classes) | T4 | 6-10 hrs | $3-5 |
| Full (27 classes) | T4 | 20-30 hrs | $10-15 |
| Full (27 classes) | V100 | 5-8 hrs | $15-25 |
| Configuration | FPS | Notes |
|---|---|---|
| Pi 4 + TFLite | 5-8 | Baseline |
| Pi 5 + TFLite | 12-18 | 2x faster |
| Pi 4 + Coral USB | 25-35 | 10x speedup |
| Pi 5 + Coral USB | 40-50 | Best performance |
The model supports multiple classification configurations. See configs/ for
options.
| Class ID | Name | Examples |
|---|---|---|
| 0 | drone | Quadcopters, multirotors, fixed-wing UAVs, racing drones |
| 1 | bird_small | Sparrows, finches (<40cm wingspan) |
| 2 | bird_large | Eagles, hawks, flocks (>40cm wingspan) |
| 3 | aircraft | Planes, helicopters, gliders, hot air balloons |
| 4 | recreational | Kites, party balloons, weather balloons, RC planes |
| 5 | sports | Balls, frisbees, projectiles |
| 6 | debris | Plastic bags, paper, leaves, feathers |
| 7 | insect | Flies, bees, dragonflies (close to camera) |
| 8 | atmospheric | Rain, snow, lens flare, artifacts |
| 9 | background | Sky, clouds, nothing detected |
For simpler use cases, use configs/dataset-binary.yaml:
| Class ID | Name | Examples |
|---|---|---|
| 0 | drone | All UAVs/drones |
| 1 | not_drone | Everything else |
```json
{
  "timestamp": "2026-01-07T12:00:00",
  "class_id": 0,
  "class_name": "drone",
  "confidence": 0.87,
  "bbox": [120, 80, 280, 200],
  "drone_score": 0.92,
  "is_drone": true
}
```
- SSH into your Pi:

  ```bash
  ssh pi@raspberrypi.local  # or use IP address
  ```

- Clone and set up:

  ```bash
  cd ~
  git clone https://github.com/JustAGhosT/NexaMesh.git
  cd NexaMesh/apps/detector

  # Create virtual environment
  python3 -m venv venv
  source venv/bin/activate

  # Install dependencies
  pip install -e ".[pi]"

  # Enable camera
  sudo raspi-config   # Interface Options > Camera > Enable
  sudo reboot
  ```

- Download or transfer model:

  ```bash
  # Create models directory
  mkdir -p models/

  # Transfer model from development machine
  # On your dev machine:
  scp models/drone-detector_int8.tflite pi@raspberrypi.local:~/NexaMesh/apps/detector/models/
  ```

- Create config file (optional but recommended):

  ```bash
  python src/main.py --generate-config config.yaml
  # Edit config.yaml as needed
  ```

- Test camera:

  ```bash
  python src/main.py --camera auto --engine mock
  # Should open video window showing live feed
  ```

- Run detection:

  ```bash
  python src/main.py --model models/drone-detector_int8.tflite --camera auto
  ```
Create a systemd service for automatic startup:

```bash
# Create service file
sudo nano /etc/systemd/system/drone-detector.service
```

Add the following (adjust paths as needed):
```ini
[Unit]
Description=NexaMesh Drone Detector
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/NexaMesh/apps/detector
Environment="PATH=/home/pi/NexaMesh/apps/detector/venv/bin"
ExecStart=/home/pi/NexaMesh/apps/detector/venv/bin/python src/main.py \
    --config /home/pi/NexaMesh/apps/detector/config.yaml \
    --headless
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start the service:
```bash
sudo systemctl daemon-reload
sudo systemctl enable drone-detector.service
sudo systemctl start drone-detector.service

# Check status
sudo systemctl status drone-detector.service

# View logs
journalctl -u drone-detector.service -f
```

Send detections to the main platform:
```bash
python src/main.py --model models/drone-detector_int8.tflite \
    --alert-webhook https://your-phoenixrooivalk-api/api/detections
```

Or configure in config.yaml:

```yaml
alert:
  webhook_url: "https://your-phoenixrooivalk-api/api/detections"
```
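Any HTTP endpoint that accepts a JSON POST can receive these alerts. For local testing, a minimal receiver sketch using aiohttp; it assumes the detector POSTs one detection event per request, in the JSON shape shown earlier:

```python
# Minimal webhook receiver for local testing.
# Assumption: the detector POSTs one JSON detection event per request.
from aiohttp import web


async def handle_detection(request: web.Request) -> web.Response:
    event = await request.json()
    print(f"got {event.get('class_name')} conf={event.get('confidence')}")
    return web.json_response({"status": "ok"})


app = web.Application()
app.router.add_post("/api/detections", handle_detection)

if __name__ == "__main__":
    # Point alert.webhook_url at http://<this-host>:9000/api/detections
    web.run_app(app, port=9000)
```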
The detector supports structured logging to files. Logging is disabled by default.

Add to your config.yaml:
```yaml
logging:
  level: INFO              # DEBUG, INFO, WARNING, ERROR
  log_file: "detector.log" # Path to log file
  json_format: false       # Use JSON format for easy parsing
  max_bytes: 10000000      # Rotate at 10MB
  backup_count: 5          # Keep 5 backup files
```

Then run:
```bash
python src/main.py --config config.yaml --model models/drone-detector.tflite
```

Or set environment variables:

```bash
export LOG_LEVEL=INFO
export LOG_LOG_FILE=detector.log
export LOG_JSON_FORMAT=false
python src/main.py --model models/drone-detector.tflite
```

Log files are written to the path specified in `log_file`:
- Default if not specified: no log file (console only)
- Example: `detector.log` → current directory
- Example: `/var/log/detector.log` → system log directory
- Example: `logs/detector.log` → `logs/` directory (created automatically)
Log files automatically rotate when they reach `max_bytes` (default: 10MB):

- `detector.log` - current log
- `detector.log.1` - previous log
- `detector.log.2` - older log
- ... up to `backup_count` files
Console Output (default):

```
2026-01-07 12:00:00 [INFO] drone_detector: Pipeline started
2026-01-07 12:00:01 [WARNING] drone_detector: DRONE DETECTED: conf=0.87 score=0.92
```

File Output (JSON format):

```json
{"timestamp": "2026-01-07T12:00:00Z", "level": "INFO",
 "logger": "drone_detector", "message": "Pipeline started",
 "module": "main", "function": "main", "line": 432}
{"timestamp": "2026-01-07T12:00:01Z", "level": "WARNING",
 "logger": "drone_detector", "message": "DRONE DETECTED: conf=0.87 score=0.92",
 "frame_number": 150, "confidence": 0.87, "drone_score": 0.92}
```

Save detections to JSON file:
```bash
python src/main.py --model models/drone-detector.tflite --save-detections detections.json
```

This creates a JSON file with all detection events:

```json
[
  {
    "timestamp": "2026-01-07T12:00:01",
    "frame_number": 150,
    "source_id": "picamera",
    "class_id": 0,
    "class_name": "drone",
    "confidence": 0.87,
    "bbox": [120, 80, 280, 200],
    "drone_score": 0.92,
    "is_drone": true
  }
]
```

Note: Log files (`*.log`) and detection files (`detections*.json`) are automatically ignored by Git (see `.gitignore`).
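Because the file is plain JSON, post-run analysis needs nothing beyond the standard library. A small sketch that summarizes a session:

```python
# Summarize a saved detections file (format shown above).
import json
from collections import Counter

with open("detections.json") as f:
    events = json.load(f)

print("events per class:", dict(Counter(e["class_name"] for e in events)))

drones = [e for e in events if e.get("is_drone")]
if drones:
    best = max(e["confidence"] for e in drones)
    print(f"{len(drones)} drone events, max confidence {best:.2f}")
```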
Quick Camera Test with Detector (Recommended):
The easiest way to verify your camera works is to run the detector with mock inference:
```bash
# Test USB webcam with mock inference (no model needed)
python src/main.py --camera usb --engine mock

# Test Pi Camera with mock inference
python src/main.py --camera picamera --engine mock

# Auto-detect camera with mock inference
python src/main.py --camera auto --engine mock
```

What to Look For (Camera is Working):
- Startup Output:

  ```
  Pipeline started:
    Frame source: picamera   # or "usb" or "opencv" (not "mock"!)
    Resolution: (640, 480)   # Actual camera resolution
    Inference: mock
  ```

- Video Window:
  - A window opens showing live camera feed
  - You should see what the camera sees in real-time
  - Mock detections appear as colored boxes (for testing)
- FPS Counter:
  - Top-left corner shows: `FPS: XX.X (XX.Xms)`
  - FPS should be > 0 (typically 15-30 FPS)
  - Counter updates smoothly
- Frame Counter:
  - Bottom of window shows: `Frame: XXX | Detections: X | Tracks: X`
  - Frame number increases continuously
  - No "Failed to read frame" errors in console
- Console Output:
  - Should see periodic `[DRONE DETECTED]` messages (from mock inference)
  - No errors about camera not found

If Camera is NOT Working:

- You'll see: `ERROR: Failed to open frame source` or `ERROR: Failed to start pipeline`
- No video window appears
- Frame source shows as "mock" instead of actual camera type
- Console shows "Failed to read frame" repeatedly
System-Level Camera Tests (Troubleshooting):

```bash
# For Pi Camera (libcamera)
libcamera-hello --timeout 5000       # Should show preview window
libcamera-vid -t 5000 -o test.h264   # Records 5 seconds

# For USB webcam (OpenCV test)
python3 -c "import cv2; cap = cv2.VideoCapture(0); \
  print('Camera opened:', cap.isOpened()); \
  ret, frame = cap.read(); \
  print('Frame read:', ret, 'Shape:', \
  frame.shape if ret else 'N/A'); cap.release()"

# Check video devices
ls -la /dev/video*       # Should list video devices
v4l2-ctl --list-devices  # List all video devices with details

# Check USB devices
lsusb | grep -i camera   # List USB cameras
```

If the camera is still not detected:

```bash
# Enable camera in raspi-config
sudo raspi-config   # Interface Options > Camera > Enable
# Reboot after enabling

# For Pi Camera - test libcamera
libcamera-hello --timeout 5000

# For USB webcam - check if detected
lsusb | grep -i camera
v4l2-ctl --list-devices

# Check permissions
groups $USER   # Should include 'video' group
# If not: sudo usermod -aG video $USER (then logout/login)
```
If FPS is low:

- Use smaller input size: `--width 320 --height 240`
- Add a Coral USB Accelerator
- Use Pi 5 instead of Pi 4
- Enable headless mode: `--headless`

```bash
# Install Pi-optimized TFLite runtime
pip install tflite-runtime
```

The targeting module provides distance estimation and engagement control:
Uses the pinhole camera model to estimate target distance:

```
distance = (assumed_drone_size * focal_length_px) / bbox_size_px
```
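As a worked example of the formula (the focal length below is illustrative; derive the real value from your camera's calibration):

```python
# Pinhole-model distance estimate, mirroring the formula above.
ASSUMED_DRONE_SIZE_M = 0.30  # matches assumed_drone_size_m in the config below


def estimate_distance_m(bbox_width_px: float, focal_length_px: float) -> float:
    """Apparent size shrinks linearly with distance under the pinhole model."""
    return (ASSUMED_DRONE_SIZE_M * focal_length_px) / bbox_width_px


# A 30 cm drone spanning 60 px at an (illustrative) 1000 px focal length:
print(f"{estimate_distance_m(60.0, 1000.0):.1f} m")  # -> 5.0 m
```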
Configure in config.yaml:

```yaml
targeting:
  assumed_drone_size_m: 0.30       # Assumed drone size (30cm)
  max_targeting_distance_m: 100.0  # Max tracking distance
```

Safety interlocks before engagement (a sketch of the gating logic follows the list):
- System must be armed
- Minimum confidence threshold (0.85)
- Minimum track frames (10)
- Distance envelope (5m - 50m)
- Velocity threshold (< 30 m/s)
- Cooldown period between triggers
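As an illustration of how these interlocks compose, a sketch with hypothetical names (the real logic lives in `src/targeting.py`):

```python
# Hypothetical illustration of chained safety interlocks; the project's
# actual engagement logic lives in src/targeting.py.
from dataclasses import dataclass


@dataclass
class TrackState:
    armed: bool
    confidence: float
    track_frames: int
    distance_m: float
    velocity_mps: float
    cooldown_remaining_s: float


def may_engage(t: TrackState) -> bool:
    """Every interlock must pass; any single failure vetoes engagement."""
    return (
        t.armed
        and t.confidence >= 0.85
        and t.track_frames >= 10
        and 5.0 <= t.distance_m <= 50.0
        and t.velocity_mps < 30.0
        and t.cooldown_remaining_s <= 0.0
    )
```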
```bash
# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/unit/test_targeting.py -v
```
```bash
# Format code
black src tests
isort src tests

# Lint
ruff check src tests

# Type check
mypy src
```

The project uses GitHub Actions for continuous integration:
- Lint: ruff, black, isort, mypy
- Test: pytest on Python 3.9-3.11
- Security: pip-audit, bandit
Comprehensive documentation is available in the docs/ directory:
- Configuration Reference - Complete guide to all configurable settings, YAML files, and environment variables
- Countermeasure Options - Guide to countermeasure systems that can be triggered by the detector's GPIO output
- Product Catalog - Complete specifications, bill of materials, and build guides for all products
- Architecture - System architecture and component design
- Classification Taxonomy - Object classification system and class definitions
- Training Cost Estimate - Cost analysis for model training on Azure ML
- Switch Guide - Guide to switching between different components
See docs/ for detailed API documentation.
MIT - Part of NexaMesh project