Disclaimer: This project is for portfolio and showcase purposes only. It demonstrates the design and logic of a Distributed Host Integrity Monitoring Platform but requires additional hardening for production use.
The Distributed Host Integrity Monitoring Platform (DHIMP) is a scalable security solution designed to monitor file integrity across multiple distributed servers from a single vantage point.
The system uses a decentralized architecture: each monitored server functions as an independent unit containing its own Agent and Backend Aggregator. A Centralized Dashboard then connects to these distributed units, allowing administrators to monitor security events across the entire infrastructure through a "Single Pane of Glass" interface.
Each monitored server operates as a self-contained unit (Agent + Backend + Infra). The Centralized Dashboard aggregates data from all these sources.
graph TD
subgraph "Server Unit (Repeated for each Monitored Server)"
direction TB
subgraph "Host System"
FS["File System (Create/Modify/Delete)"]
Auditd["Auditd Service (Syscall Logs)"]
Incrond["Incrond Daemon (File Watcher)"]
end
subgraph "Agent Layer"
Agent["Python Agent Script"]
YARA["YARA Engine (Malware Scan)"]
end
subgraph "Backend Layer (Dockerized)"
API["Django REST API (Gunicorn)"]
DB[("SQLite Database")]
Cron["Docker Cron Job (Auto-Purging)"]
Archive[("CSV Archive (Monthly Reports)")]
end
%% Internal Data Flow
FS -- "Trigger Event" --> Incrond
Incrond -- "Spawn With Args" --> Agent
Agent -- "1. Query User (ausearch)" --> Auditd
Agent -- "2. Scan File" --> YARA
Agent -- "3. Send Alert (JSON)" --> API
API -- "Store Data" --> DB
Cron -- "Monthly Archive" --> DB
DB -- "Export Logs" --> Archive
end
subgraph "SaaS Central View"
Dashboard["Next.js Central Dashboard"]
end
%% External Data Flow
Dashboard -- "Fetch Aggregated Logs (REST)" --> API
Dashboard -- "Check Service Health" --> API
API -- "Status Check" --> Incrond
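The per-server flow above (incron event → ausearch lookup → YARA scan → JSON POST) can be sketched in Python. This is an illustrative outline, not the project's actual agent.py; the helper names, payload fields, and severity logic are assumptions.

```python
import json
import subprocess
import urllib.request

API_INGEST_URL = "http://127.0.0.1:8000/api/ingest/fim/"  # local Backend Aggregator

def query_audit_actor(path):
    """Step 1: ask auditd (via ausearch) who touched the file. Hypothetical helper."""
    try:
        out = subprocess.run(
            ["ausearch", "-f", path, "--format", "text"],
            capture_output=True, text=True, timeout=10,
        ).stdout.strip()
        return out.splitlines()[-1] if out else "unknown"
    except (OSError, subprocess.TimeoutExpired):
        return "unknown"

def build_alert(path, event, actor, yara_matches):
    """Steps 2-3 glue: assemble the JSON payload the backend ingests (fields illustrative)."""
    return {
        "path": path,
        "event": event,              # e.g. IN_MODIFY, passed in by incron
        "actor": actor,              # forensic detail recovered from auditd
        "yara_matches": yara_matches,
        "severity": "high" if yara_matches else "info",
    }

def send_alert(alert):
    """Step 3: POST the alert to the local Django REST API."""
    req = urllib.request.Request(
        API_INGEST_URL,
        data=json.dumps(alert).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```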
While the diagram above shows the internal logic of a single server, the diagram below illustrates how the Centralized Dashboard connects to multiple distributed nodes to form a unified monitoring network.
graph TD
User(("👤 Security Admin"))
subgraph "Central Command (SaaS Layer)"
Dashboard["🖥️ Next.js Dashboard (Single Pane of Glass)"]
end
subgraph "Distributed Infrastructure"
direction LR
NodeA["Server A"]
NodeB["Server B"]
NodeC["Server C"]
NodeD["Server D"]
end
%% Flows
User ==> Dashboard
Dashboard -- "1. Aggregated Query (HTTPS)" --> NodeA
Dashboard -- "2. Aggregated Query (HTTPS)" --> NodeB
Dashboard -- "3. Aggregated Query (HTTPS)" --> NodeC
Dashboard -- "4. Aggregated Query (HTTPS)" --> NodeD
linkStyle 1,2,3,4 stroke:#2980b9,stroke-width:2px;
.
├── agent/                    # [PER-SERVER] Host IDS Agent
│   ├── config/               # Configuration templates
│   │   ├── crontab.example   # Example Cron jobs
│   │   └── incrontab.example # Example Incron rules
│   ├── rules/                # Detection Rules
│   │   └── yara-rules.yar    # YARA definitions for malware scanning
│   ├── src/
│   │   └── agent.py          # Main Logic: Watcher -> Audit -> YARA -> API
│   ├── requirements.txt      # Python dependencies (requests, yara-python)
│   └── README.md
│
├── backend/                  # [PER-SERVER] Local Data Aggregator
│   ├── api/                  # API Application
│   │   ├── views.py          # Endpoints (Ingest, Analytics)
│   │   ├── models.py         # DB Schema (FimLog)
│   │   ├── serializers.py    # Data Validation
│   │   └── urls.py           # Router
│   ├── backend/              # Project Settings
│   │   └── settings.py       # Django Config (SQLite, JWT)
│   ├── docker-compose.yml    # Services: Backend, Prometheus, Grafana, Node-Exporter
│   ├── Dockerfile            # Gunicorn Production Build
│   └── manage.py
│
├── frontend/                 # [CENTRALIZED] Analytical Dashboard
│   ├── src/
│   │   ├── app/              # Next.js App Router (Dashboard, Login)
│   │   ├── components/       # UI Components (Visx Charts, Lucide Icons)
│   │   └── lib/              # Utils
│   ├── public/               # Static Assets
│   ├── Dockerfile            # Production Docker Build
│   ├── next.config.ts        # Next.js Config
│   ├── tailwind.config.js    # Styling Config
│   └── package.json
│
├── infra/                    # Deployment Automation
│   ├── deploy-backend.yml    # Ansible Playbook for Backend
│   └── inventory.ini         # Server Inventory
│
└── monthly-reports/          # Reporting Assets

- Host-Based Real-time Agent: Built on Python & Linux Auditd to capture granular forensic details (User ID, Process Name, Working Directory). Implements Context-Aware Filtering to distinguish anomalies based on operational hours.
- Hybrid Threat Detection: Combines Behavioral Analysis with YARA Content Inspection to detect complex attack patterns like RCE, PHP Webshells, and Obfuscated Code.
- Offline Data Resilience: Integrated SQLite local buffering to prevent log loss during network instability, with automated re-sync mechanisms.
- Independently Operable Backends: Distributed Django architecture where each node operates independently, secured by Shared-Secret JWT Authentication.
- Hardened Monitoring Stack: Containerized Prometheus & Grafana with Localhost Binding (127.0.0.1), enforcing access exclusively via secure SSH Tunnels.
- Autonomous Watchdog & Health: Real-time Incron monitoring and Cron-based self-healing routines targeting 99.9% service uptime on edge nodes.
- Unified Dashboard (Single Pane of Glass): Next.js interface featuring Multi-Server API Aggregation for real-time monitoring of global nodes in a single session.
- Automated Data Lifecycle: Docker-based Cron jobs for Monthly Archival & Auto-Purging, maintaining peak database performance by offloading historical data to CSV.
- Deep-Clean CI/CD Pipeline: GitHub Actions & Ansible strategy featuring autonomous dependency handling and Docker Residue Cleanup for stable deployments.
- Shift-Left Security: Integrated Trivy Vulnerability Scanner with a Zero-Tolerance Policy for Critical/High vulnerabilities during the build phase.
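The Offline Data Resilience feature above amounts to a small write-ahead queue: alerts that fail to reach the backend are parked in SQLite and replayed later. A minimal sketch follows; the class name, schema, and re-sync strategy are illustrative assumptions, not the project's code.

```python
import sqlite3

class LocalBuffer:
    """Queue alert payloads in SQLite while the backend is unreachable (illustrative)."""

    def __init__(self, db_path="fim_buffer.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)"
        )

    def enqueue(self, payload: str):
        # Called when the POST to the local API fails
        with self.conn:
            self.conn.execute("INSERT INTO pending (payload) VALUES (?)", (payload,))

    def resync(self, send) -> int:
        """Replay buffered rows in order; delete each row only after `send` succeeds."""
        rows = self.conn.execute("SELECT id, payload FROM pending ORDER BY id").fetchall()
        for row_id, payload in rows:
            send(payload)  # raises on failure, leaving the row queued for the next pass
            with self.conn:
                self.conn.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        return len(rows)
```

Deleting each row only after a successful send means a crash mid-replay at worst re-sends an alert, never loses one.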
- Python 3: The core runtime for the agent logic.
- YARA: Embedded malware analysis engine for signature-based detection.
- Cron: Manages automated system tasks and maintenance to ensure high availability and data integrity.
- Watchdog Mechanism: Periodic health-checks and self-healing for Incron and Auditd services.
- Automated Maintenance: Monthly service rotation and physical monitoring integrity tests.
- Data Retention Policy: Monthly automated log archival to CSV and database cleanup via Django management commands.
- Incron: Uses inotify triggers for real-time file system monitoring.
- Linux Auditd: Captures deep forensic data (User ID, Process Name) via syscalls.
- Bash Scripting: Used for self-healing and service recovery mechanisms.
- SQLite: Provides local buffering to ensure no logs are lost during network outages.
- Django REST Framework: High-performance edge backend for data aggregation.
- Next.js: The framework for the Centralized Dashboard (Single Pane of Glass).
- TypeScript: Ensures type safety and code quality across the frontend.
- Tailwind CSS: Utility-first CSS framework for rapid and responsive UI design.
- JWT (Shared-Secret): Secure cross-server authentication mechanism.
- Docker: Containerization for consistent deployment across all environments.
- Prometheus & Grafana: Complete infrastructure monitoring stack.
- GitHub Actions: Automated CI/CD pipelines for testing and deployment.
- Ansible: Configuration management for mass-deployment of agents.
- Trivy: Vulnerability scanning to ensure secure container images.
- AWS EC2: The production environment hosting the distributed nodes.
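The Shared-Secret JWT scheme listed above reduces to HMAC-SHA256 over the token's header and payload, with the same secret deployed to every node and the dashboard. A stdlib-only sketch of signing and verification (the project itself would normally use a Django JWT library; names and the secret are placeholders):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # same value deployed to every backend node and the dashboard

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    """Produce an HS256 JWT: base64url(header).base64url(payload).base64url(mac)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    mac = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(mac)}"

def verify(token: str) -> bool:
    """Recompute the MAC and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    mac = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(mac), sig)
```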
Run this on each server you want to monitor.
# 1. Clone & Configure Infra
cd infra
ansible-playbook -i inventory.ini deploy-backend.yml
# 2. Manual Start (if not using Ansible)
cd backend
docker-compose up -d --build

This starts the Local Backend Aggregator (Gunicorn/Django) and the Monitoring Stack (Prometheus/Grafana).
Run these steps on the same monitored server:
cd agent
pip install -r requirements.txt
# Ensure dependencies like yara-python and requests are installed

Trigger the agent immediately upon file events.
# 1. Install Incron
sudo apt install incron
echo "root" | sudo tee -a /etc/incron.allow
# 2. Add Rule (Edit with: sudo incrontab -e)
# Format: <Directory> <Events> <Command>
/var/www/html IN_MODIFY,IN_CREATE,IN_DELETE /usr/bin/python3 /path/to/agent/src/agent.py $@/$# $%

Ensure self-healing and periodic health checks.
# Edit crontab: sudo crontab -e
# 1. Check Incron Status (Every 2 mins) - Reports to Dashboard
*/2 * * * * /path/to/agent/scripts/check_incron.sh /tmp/incron_status.txt
# 2. Self-Healing Auditd (Every 5 mins) - Restarts daemon if crashed
*/5 * * * * /path/to/agent/scripts/auditd_healer.sh >> /var/log/fim_healer.log 2>&1
# 3. Monthly Deep Maintenance (1st of Month) - Log rotation & Integrity Test
0 3 1 * * /path/to/agent/scripts/maintain_auditd.sh

Edit agent/src/agent.py to point to your local backend:
API_INGEST_URL = "http://127.0.0.1:8000/api/ingest/fim/"

Run this once on your admin machine or central server.
cd frontend
cp .env.local.example .env.local

Edit .env.local to list all your monitored servers:
NEXT_PUBLIC_API_MAIN=https://server1.com/api
NEXT_PUBLIC_API_SERVER2=https://server2.com/api
NEXT_PUBLIC_API_SERVER3=https://server3.com/api

Start the dashboard:
npm install
npm run dev
# OR with Docker
docker build -t fim-dashboard . && docker run -p 3000:3000 fim-dashboard

- Ensure the Centralized Dashboard has network access to the Backend Aggregator ports (default: 8000) on all monitored servers.
- Use HTTPS in production to secure the data in transit between Server Units and the Dashboard.
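The dashboard's Multi-Server API Aggregation boils down to fanning out one request per NEXT_PUBLIC_API_* endpoint and merging the results into a single feed. The merge step is sketched below in Python for brevity (the real dashboard would do this in TypeScript; field names are illustrative):

```python
def merge_node_logs(per_node: dict) -> list:
    """Flatten {node_name: [log_entry, ...]} into one feed, newest first.

    Each entry is tagged with its source node so the single-pane view can
    show where an alert originated.
    """
    merged = [
        {**entry, "node": node}
        for node, logs in per_node.items()
        for entry in logs
    ]
    merged.sort(key=lambda e: e["timestamp"], reverse=True)
    return merged
```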