An AI-powered multi-agent system for real-time forest monitoring and threat detection
About • Features • Quick Start • Architecture • Agent Pipeline • Testing • Contributing
🌐 Language: English • हिन्दी • বাংলা
Project SILVANUS is an advanced AI-driven acoustic monitoring system designed to protect forests from illegal activities. Named after the Roman god of forests, SILVANUS acts as a digital guardian for protected wilderness areas.
The system uses a Multi-Agent System (MAS) architecture with:
- Bio-acoustic analysis using CLAP embeddings
- Vector similarity search with Qdrant Cloud
- Large Language Models (Groq/OpenRouter/Ollama) for intelligent reasoning
- Real-time dashboard for monitoring and incident response
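To make the vector-search step concrete, here is a minimal NumPy sketch of how an audio embedding can be scored against stored threat signatures by cosine similarity, the metric vector databases like Qdrant typically use. The toy 4-dimensional vectors and signature names are invented for the example; real CLAP embeddings are 512-dimensional.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, signatures: np.ndarray) -> np.ndarray:
    """Score one query embedding against a bank of stored signatures."""
    q = query / np.linalg.norm(query)
    s = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    return s @ q  # one cosine score per stored signature

# Toy 4-dimensional embeddings (real CLAP vectors are 512-dimensional)
chainsaw_sig = np.array([0.9, 0.1, 0.0, 0.1])
birdsong_sig = np.array([0.0, 0.2, 0.9, 0.3])
bank = np.vstack([chainsaw_sig, birdsong_sig])

query = np.array([0.8, 0.2, 0.1, 0.1])   # a sound resembling a chainsaw
scores = cosine_similarity(query, bank)
best = int(np.argmax(scores))             # index of the closest signature
```

In the real pipeline this nearest-signature lookup happens inside Qdrant; the sketch only illustrates the scoring idea.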
Illegal logging, poaching, and unauthorized intrusion cause billions in environmental damage annually.
- ✅ 24/7 automated monitoring using acoustic sensors
- ✅ AI-powered threat detection with high accuracy
- ✅ Multi-agent verification to reduce false positives
- ✅ Real-time alerts with GPS coordinates
- ✅ Historical correlation via Supabase for pattern detection
| Feature | Description |
|---|---|
| 🎙️ Real-time Acoustic Analysis | Spectral analysis to classify sounds using librosa |
| 🎯 Threat Detection | Identifies chainsaws, gunshots, and vehicles using CLAP embeddings |
| 🤖 Multi-Agent System | Sentinel, Analyst, Validator, and Dispatcher agents verify threats |
| 📋 Incident Reporting | Detailed reports with GPS coordinates and severity scores |
| 🖥️ Interactive Dashboard | Next.js web interface for real-time monitoring |
| 🔒 Secure Architecture | API key authentication and role-based access |
| 📊 Historical Pattern Analysis | Queries Supabase for past incidents to detect repeat offenders |
| 🧠 Hybrid Intelligence | Combines rule-based speed with LLM adaptability |
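The "Hybrid Intelligence" idea can be sketched in a few lines: a cheap similarity rule settles the clear-cut cases, and only the ambiguous middle band is deferred to an LLM. The threshold values and the `llm_classify` callback are invented for illustration and are not the project's actual API.

```python
from typing import Callable

THREAT_THRESHOLD = 0.85  # hypothetical similarity cutoff
CLEAR_THRESHOLD = 0.30   # hypothetical benign cutoff

def hybrid_classify(similarity: float, llm_classify: Callable[[], str]) -> str:
    """Fast rule first; call the (slower) LLM only for ambiguous cases."""
    if similarity >= THREAT_THRESHOLD:
        return "THREAT"      # clear match: no LLM call needed
    if similarity < CLEAR_THRESHOLD:
        return "CLEAR"       # clearly benign: no LLM call needed
    return llm_classify()    # ambiguous band: defer to the LLM

# Usage with a stubbed LLM call
print(hybrid_classify(0.95, lambda: "THREAT"))  # decided by the rule
print(hybrid_classify(0.50, lambda: "REVIEW"))  # decided by the LLM stub
```

This shape keeps the common path fast and cheap while preserving LLM judgment where the rules are least reliable.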
```bash
# Clone the repository
git clone https://github.com/Nabarup1/Silvanus.git
cd Silvanus

# Install Python dependencies
pip install -r requirements.txt

# Run a simulation
python main.py simulate -n 10 --llm

# Analyze an audio file
python main.py analyze path/to/audio.wav --json
```

Option 1: Automatic (Recommended)

Double-click `start_dev.bat` or run it from a terminal:

```bash
.\start_dev.bat
```

This launches both the Python backend (port 8000) and the Next.js frontend (port 3000) in separate windows.
Option 2: Manual Start

- Backend:

  ```bash
  uvicorn api:app --host 0.0.0.0 --port 8000
  ```

- Frontend:

  ```bash
  cd web && npm run dev
  ```
SILVANUS integrates advanced bio-acoustics with Agentic AI. The system is divided into three core layers:
- Sensory Layer (The Ears): Captures audio, extracts spectral features, and generates CLAP embeddings.
- Cognitive Layer (The Brain): A LangGraph-based Multi-Agent System that reasons about the audio context.
- Interface Layer (The Face): A Next.js Web Dashboard for real-time monitoring and incident reporting.
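The cognitive layer's agent handoff can be sketched as a plain-Python state machine. The actual implementation uses LangGraph; the thresholds, field names, and verdict labels below are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class EventState:
    similarity: float   # Qdrant match score for the audio clip
    authorized: bool    # GPS zone found in the authorization registry
    mechanical: bool    # Validator's spectral verdict
    verdict: str = "PENDING"
    trail: list = field(default_factory=list)

def sentinel(s: EventState) -> EventState:
    """First gate: escalate only on strong threat similarity."""
    s.trail.append("sentinel")
    if s.similarity < 0.8:
        s.verdict = "CLEAR"
    return s

def analyst(s: EventState) -> EventState:
    """Context check: authorized activity is not a threat."""
    s.trail.append("analyst")
    if s.authorized:
        s.verdict = "CLEAR"
    return s

def validator(s: EventState) -> EventState:
    """Spectral check: confirm a mechanical source."""
    s.trail.append("validator")
    if not s.mechanical:
        s.verdict = "FALSE_POSITIVE"
    return s

def dispatcher(s: EventState) -> EventState:
    """Everything upstream passed: file an incident."""
    s.trail.append("dispatcher")
    s.verdict = "INCIDENT"
    return s

def run_pipeline(state: EventState) -> EventState:
    for agent in (sentinel, analyst, validator, dispatcher):
        state = agent(state)
        if state.verdict != "PENDING":  # short-circuit once decided
            break
    return state
```

The short-circuit mirrors the graph below: most events exit early at the Sentinel or Analyst, so the expensive validation and dispatch steps run only for genuinely suspicious audio.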
```mermaid
graph TD
    subgraph SENSORS ["SENSORY LAYER"]
        Audio[Raw Audio] -->|Librosa| Features[Spectral Features]
        Audio -->|CLAP Model| Embed[Vector Embedding]
    end
    subgraph BRAIN ["COGNITIVE LAYER (Python)"]
        Embed -->|Query| Qdrant[(Forest Memory)]
        Qdrant -->|Matches| Sentinel[👮 SENTINEL NODE]
        Sentinel -->|Threat Detected?| Analyst[🕵️ ANALYST NODE]
        Sentinel -->|Clear| Safe[✅ Event Cleared]
        Analyst -->|Check Registry + Supabase| AuthDB[(Auth + History)]
        Analyst -->|Unauthorized?| Validator[🔬 VALIDATOR NODE]
        Analyst -->|Authorized| Safe
        Validator -->|Spectral Analysis| Decision{Threat Confirmed?}
        Decision -->|Yes| Dispatcher[🚨 DISPATCHER NODE]
        Decision -->|No| FalsePos[⚠️ False Positive]
    end
    subgraph WEB ["INTERFACE LAYER (Next.js)"]
        Dispatcher -->|JSON Report| API[Next.js API]
        API -->|Real-time| Dashboard[Web Dashboard]
        Dashboard -->|Alerts| Rangers[Forest Rangers]
    end
    style Sentinel fill:#1f2937,stroke:#10b981,color:#fff
    style Analyst fill:#1f2937,stroke:#3b82f6,color:#fff
    style Validator fill:#1f2937,stroke:#8b5cf6,color:#fff
    style Dispatcher fill:#ef4444,stroke:#fff,color:#fff
```
```mermaid
sequenceDiagram
    participant User as 👤 User
    participant UI as 🖥️ Next.js UI (Client)
    participant API as ⚡ API Route (/api/analyze)
    participant Python as 🐍 Python Core (Backend)
    participant Cloud as ☁️ Qdrant Cloud
    User->>UI: Uploads Audio File / Simulates Event
    UI->>API: POST /api/analyze (FormData)
    activate API
    note right of API: Spawns Python Subprocess
    API->>Python: python main.py analyze --json
    activate Python
    Python->>Cloud: Query Similarity (Embeddings)
    Cloud-->>Python: Return Top K Matches
    Python->>Python: Agent Loop (Sentinel->Analyst->Validator)
    Python-->>API: Returns JSON Report
    deactivate Python
    API-->>UI: Returns Analysis Result
    deactivate API
    UI->>User: Displays Evidence & Threat Assessment
```
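The subprocess handoff in the diagram can be sketched in Python: spawn the CLI, capture stdout, and parse the JSON report it prints. The `run_analysis` helper and the stub command below are illustrative, not the project's actual route code; the real route would invoke `main.py analyze <file> --json`.

```python
import json
import subprocess
import sys

def run_analysis(cmd: list[str]) -> dict:
    """Run an analysis command and parse the JSON report from its stdout,
    mirroring how the Next.js API route consumes the Python CLI."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Demonstrated with a stub command that prints a fixed report; the real
# route would spawn [sys.executable, "main.py", "analyze", path, "--json"].
stub = [sys.executable, "-c",
        "import json; print(json.dumps({'threat': True, 'severity': 7}))"]
report = run_analysis(stub)
```

Using `check=True` makes a non-zero exit from the analyzer raise immediately instead of silently yielding unparseable output.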
```
Silvanus/
├── src/                         # 🐍 PYTHON CORE ENGINE
│   ├── agents/                  # LangGraph Agent Definitions
│   │   ├── graph.py             # Main Cognitive Loop (State Machine)
│   │   ├── state.py             # Pydantic State Models
│   │   └── tools.py             # Agent Tools (Memory/Registry/Supabase)
│   ├── audio/                   # Audio Processing Modules
│   │   ├── embeddings.py        # CLAP Model Integration
│   │   ├── features.py          # Spectral Feature Extractor
│   │   └── processor.py         # Loading & Segmentation
│   ├── memory/                  # Knowledge Base
│   │   ├── qdrant.py            # Vector DB Interface (Cloud/Local)
│   │   └── registry.py          # Authorization Rules
│   └── tools/                   # Utilities
│       ├── download_audioset.py # Bio-acoustic Dataset Downloader
│       └── ingest_audio.py      # Vector Memory Ingestion Script
├── web/                         # ⚛️ NEXT.JS FRONTEND
│   ├── src/app/                 # App Router
│   │   ├── api/analyze/         # API Route connecting to Python
│   │   ├── demo/                # Interactive Demo Page
│   │   └── dashboard/           # Monitoring Dashboard
│   └── public/                  # Assets & Images
├── tests/                       # Python Test Suite
├── .github/workflows/           # CI/CD Pipelines
├── main.py                      # CLI Entry Point & Simulator
├── api.py                       # FastAPI Wrapper for Deployment
└── requirements.txt             # Python Dependencies
```
The cognitive loop processes audio through a sequence of specialized agents:
| Step | Component | Description |
|---|---|---|
| 1 | Acoustic Processor | Loads raw audio via librosa, resamples to 48kHz, segments into 5-second chunks |
| 2 | CLAP Embedder | Converts audio to 512-dimensional vector using laion/clap-htsat-fused model |
| 3 | Sentinel Agent | First responder: queries Qdrant for threat similarity, checks baseline deviation, decides CLEAR or ESCALATE |
| 4 | Analyst Agent | Context checker: verifies GPS against authorization registry, queries Supabase for historical incidents |
| 5 | Validator Agent | Scientist: performs deep spectral analysis (RMS energy, spectral centroid, ZCR) to confirm MECHANICAL or ORGANIC source |
| 6 | Dispatcher Agent | Reporter: compiles final Incident Report (INC-XXXX), assigns severity (1-10), recommends actions |
| 7 | Forest Memory | Long-term storage in Qdrant Cloud: threat signatures, baseline sounds, event history |
Processing Flow:
Audio File -> CLAP Embed -> Sentinel -> Analyst (+Supabase) -> Validator -> Dispatcher -> Incident Report
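The Validator's spectral cues from step 5 can be computed with plain NumPy as a rough sketch. The project itself uses librosa; the 3 kHz test tone here is a stand-in for a narrow-band mechanical sound such as an engine hum.

```python
import numpy as np

def spectral_features(y: np.ndarray, sr: int = 48_000) -> dict:
    """Compute the Validator's three cues: loudness, brightness, noisiness."""
    rms = float(np.sqrt(np.mean(y ** 2)))                # RMS energy (loudness)
    mag = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    centroid = float(np.sum(freqs * mag) / np.sum(mag))  # spectral centroid (Hz)
    zcr = float(np.mean(np.abs(np.diff(np.signbit(y).astype(int)))))  # zero-crossing rate
    return {"rms": rms, "centroid_hz": centroid, "zcr": zcr}

# A steady 3 kHz tone stands in for a narrow-band mechanical source
t = np.linspace(0, 1.0, 48_000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 3000 * t)
feats = spectral_features(tone)
```

A mechanical source tends to show a stable centroid and low-variance RMS over time, while organic sounds (birdsong, wind) fluctuate; that contrast is what the MECHANICAL/ORGANIC verdict rests on.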
| Requirement | Version | Purpose |
|---|---|---|
| Python | 3.10+ | Backend & AI models |
| Node.js | 18+ | Web dashboard |
| Qdrant | Cloud/1.7+ | Vector database |
| Groq | API Key | LLM Reasoning (recommended) |
```bash
# Install dependencies and build the vector memory
pip install -r requirements.txt
python src/tools/download_audioset.py
python src/tools/ingest_audio.py data/audioset

# Set up the web dashboard
cd web
npm install
npm run dev
```

Create `.env`:
```bash
# Qdrant Cloud
QDRANT_HOST=your-cluster.cloud.qdrant.io
qdrant_silvanus_key=your_api_key

# LLM Provider (Groq recommended)
USE_GROQ=true
groq_api=your_groq_api_key
groq_model=llama-3.3-70b-versatile

# Supabase (for historical analysis)
supabase_silvanus_url=your_supabase_url
supabase_silvanus_key=your_supabase_key

# API Security
SILVANUS_API_KEY=your_secret_key
```

Run the Python test suite:

```bash
pip install pytest pytest-cov
pytest tests/ -v
pytest tests/ --cov=src
```

Run the frontend tests:

```bash
cd web
npm test
npm run test:coverage
```

| Workflow | Versions | Status |
|---|---|---|
| Python | 3.10, 3.11 | |
| Node.js | 18, 20 | |
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
GNU AGPLv3 License - see LICENSE file.
Made with 💚 for Forest Conservation
