IDS: ELK + Suricata Integrated with Threat Intelligence

Full project documentation and step-by-step setup for an ELK-based IDS (Suricata + Filebeat + Logstash + Elasticsearch + Kibana) with AlienVault OTX and VirusTotal enrichment.


Project Overview

This repository contains the documentation and configuration examples required to deploy a single-node ELK-based IDS pipeline that:

  • Captures network traffic with Suricata (running in a VM)
  • Forwards structured alerts (eve.json) to Logstash via Filebeat
  • Parses, normalizes, and enriches events in Logstash with AlienVault OTX and VirusTotal lookups
  • Stores events in Elasticsearch and visualizes them in Kibana (dashboards included)

This repo is intended as an educational/lab deployment for security students and small teams.


Architecture Diagram

(Place an architecture image in /docs/arch.png)

  • Suricata (VM) -> Filebeat (VM) --beats--> Logstash (Host) -> Elasticsearch -> Kibana
  • External enrichment: AlienVault OTX & VirusTotal (HTTP API)

Repository Layout

ELK-Suricata-IDS/
├─ ProjectELK-2.pdf        # Full project report (uploaded)
├─ README.md               # This file
├─ configs/
│  ├─ filebeat/filebeat.yml
│  ├─ logstash/pipeline/suricata-pipeline.conf
│  ├─ elasticsearch/elasticsearch.yml
│  └─ kibana/kibana.yml
├─ dashboards/              # Kibana ndjson exports or JSON visualizations
│  └─ suricata-dashboard.ndjson
├─ docs/
│  └─ arch.png
└─ LICENSE

Prerequisites

  • Two Linux machines (can be VMs):
    • Host: ELK Stack (Ubuntu 22.04 preferred)
    • VM: Suricata + Filebeat
  • Sudo access on both machines
  • Internet access for package downloads and API calls (OTX, VT)
  • API keys: AlienVault OTX API key and VirusTotal API key
  • Git & GitHub account for uploading


Installation — Step by Step (Quick Start)

Below are ordered, actionable steps. Run commands on the specified system.

Host (ELK) setup — (Run on Host machine)

  1. Update OS and install dependencies:

    sudo apt update && sudo apt upgrade -y
    sudo apt install apt-transport-https ca-certificates curl gnupg -y
  2. Install Elasticsearch, Logstash, Kibana (Deb packages)

    • Follow Elastic's repo instructions for the version you want; 9.x below is a placeholder — substitute a concrete version and keep every stack component (including Filebeat on the VM) on the same version. Example (simplified):
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.x.deb
    sudo dpkg -i elasticsearch-9.x.deb
    sudo systemctl enable --now elasticsearch

    Repeat for the Logstash & Kibana packages.

  3. Confirm Elasticsearch is up (with security enabled, supply the elastic user's password when prompted):

    curl -k -u elastic https://localhost:9200
  4. Configure Elasticsearch (/etc/elasticsearch/elasticsearch.yml):

    • network.host: 0.0.0.0 (lab convenience; restrict this on anything internet-facing)
    • Adjust the JVM heap in /etc/elasticsearch/jvm.options if needed, then restart the service: sudo systemctl restart elasticsearch
  5. Configure Kibana (/etc/kibana/kibana.yml): set server.host, elasticsearch.hosts, and SSL options if using HTTPS.

  6. Install Logstash and prepare the pipeline path: /etc/logstash/conf.d/ or /etc/logstash/pipeline/ depending on package.
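For step 5, a minimal kibana.yml sketch — the values below are placeholders for a lab setup, not this repo's exact config:

```yaml
# /etc/kibana/kibana.yml (minimal sketch; adjust hosts to your environment)
server.host: "0.0.0.0"        # listen on all interfaces -- lab use only
server.port: 5601
elasticsearch.hosts: ["https://localhost:9200"]
# If Elasticsearch security is enabled, give Kibana credentials via the
# Kibana keystore (a service account token or the kibana_system user)
# rather than hardcoding them in this file.
```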

VM (Suricata + Filebeat) setup — (Run on VM where traffic is captured)

  1. Update OS:

    sudo apt update && sudo apt upgrade -y
  2. Install Suricata:

    sudo add-apt-repository ppa:oisf/suricata-stable -y
    sudo apt update
    sudo apt install suricata -y
  3. Configure Suricata to log eve.json (default: /var/log/suricata/eve.json).
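For step 3, eve.json output is controlled by the outputs section of /etc/suricata/suricata.yaml; a typical excerpt (close to the packaged default, shown for reference) looks like:

```yaml
# /etc/suricata/suricata.yaml (excerpt)
outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json      # written under /var/log/suricata/
      types:
        - alert
        - dns
        - http
        - tls
```

Also confirm the capture interface in the af-packet section matches the VM's NIC (eth0 here is a placeholder).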

  4. Install Filebeat on the VM (or use the host if preferred). 8.x below is a placeholder — use the same concrete version as the rest of the stack:

    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.x.deb
    sudo dpkg -i filebeat-8.x.deb
  5. Enable the Suricata module and configure output to Logstash (Host IP):

    sudo filebeat modules enable suricata
    sudo nano /etc/filebeat/filebeat.yml
    # configure output.logstash:
    # hosts: ["<LOGSTASH_HOST_IP>:5044"]
  6. Start services:

    sudo systemctl enable --now suricata
    sudo systemctl enable --now filebeat

Logstash pipeline & enrichment — (Run on Host)

  1. Create a Logstash pipeline file: /etc/logstash/pipeline/suricata-pipeline.conf (example provided in configs/).

  2. Pipeline responsibilities:

    • Accept beats input (port 5044)
    • Parse eve.json fields
    • Enrich with OTX and VirusTotal via http filter and use throttle/cache
    • Output to elasticsearch index suricata-*
  3. Restart Logstash:

    sudo systemctl restart logstash

Kibana dashboard import & visualizations

  1. Open Kibana in browser: http://<HOST_IP>:5601 (or configured port)
  2. Create index pattern suricata-* and set @timestamp as time field.
  3. Import the dashboard NDJSON from /dashboards/suricata-dashboard.ndjson via Stack Management → Saved Objects → Import.
  4. If you prefer Lens, follow the dashboard steps in the project PDF to recreate visualizations.
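If you prefer the command line over the UI, the same NDJSON file can be loaded with Kibana's saved-objects import API — a sketch, assuming default port and no authentication (add -u user:pass if security is enabled):

```shell
# Import the dashboard export via the Kibana saved-objects API.
# The kbn-xsrf header is required on all Kibana API POSTs.
curl -X POST "http://<HOST_IP>:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@dashboards/suricata-dashboard.ndjson
```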

Configuration files (examples)

Full sample configs live in configs/ (placeholders); short excerpts follow:

  • filebeat.yml (excerpt)
# Use EITHER the raw input below OR the Suricata module -- enabling both
# against the same eve.json ships every event twice.
filebeat.inputs:
- type: log        # recent Filebeat versions prefer "filestream" over "log"
  enabled: true
  paths:
    - /var/log/suricata/eve.json

filebeat.modules:
- module: suricata

output.logstash:
  hosts: ["<LOGSTASH_HOST_IP>:5044"]
  • logstash pipeline (concept)
input {
  beats { port => 5044 }
}
filter {
  json { source => "message" }
  mutate { add_field => { "[@metadata][target_index]" => "suricata-%{+YYYY.MM.dd}" } }
  # http enrichment to OTX/VT (with throttle/cache)
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][target_index]}"
  }
}

Threat Intelligence Integration

  1. Get API keys:

    • AlienVault OTX: create account → get API key
    • VirusTotal: create account → obtain API key
  2. In Logstash use http filter (or ruby script) to call APIs. Example flow:

    • For each event, extract src_ip, dest_ip, and file.hash
    • Query OTX https://otx.alienvault.com/api/v1/indicators/IPv4/{ip}/reputation (add X-OTX-API-KEY header)
    • Query VirusTotal /api/v3/files/{hash} or /api/v3/ip_addresses/{ip} with x-apikey header
    • Cache responses and throttle to avoid rate limits
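As a concrete starting point for step 2, a sketch of the OTX lookup using the Logstash http filter plugin — the [otx][src_ip] target field name is illustrative, not from this repo; the VirusTotal call is analogous with an x-apikey header:

```
filter {
  if [src_ip] {
    http {
      url => "https://otx.alienvault.com/api/v1/indicators/IPv4/%{src_ip}/reputation"
      verb => "GET"
      headers => { "X-OTX-API-KEY" => "${OTX_API_KEY}" }
      target_body => "[otx][src_ip]"   # parsed JSON response lands here
    }
  }
}
```

Note that the http filter does not cache responses by itself; for rate-limited APIs pair it with the throttle filter or an external cache, as the pipeline notes above suggest.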

Important: never push API keys to a public repo. Use environment variables, the Logstash keystore, or another secrets manager instead. The pipeline can then reference the key indirectly, e.g. http { url => ... headers => { "X-OTX-API-KEY" => "${OTX_API_KEY}" } }, with OTX_API_KEY exported via systemd or the Logstash environment file.
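Two concrete ways to supply the key without putting it in the config (paths are those used by the Debian/Ubuntu packages):

```shell
# Option 1: Logstash keystore -- the value is stored encrypted and
# resolved wherever ${OTX_API_KEY} appears in pipeline configs.
sudo /usr/share/logstash/bin/logstash-keystore create
sudo /usr/share/logstash/bin/logstash-keystore add OTX_API_KEY

# Option 2: environment file sourced by the Logstash systemd unit.
echo 'OTX_API_KEY=<your-key-here>' | sudo tee -a /etc/default/logstash
sudo systemctl restart logstash
```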


Testing & Validation

  1. Verify Suricata is producing eve.json:

    tail -f /var/log/suricata/eve.json
  2. Verify Filebeat is shipping to Logstash (on VM):

    sudo journalctl -u filebeat -f
  3. Verify Logstash received events (adjust the scheme and add credentials if Elasticsearch security is enabled):

    sudo tail -f /var/log/logstash/logstash-plain.log
    curl -s "http://localhost:9200/suricata-*/_search?size=1" | jq .
  4. Use test cases (curl, wget, hydra) listed in ProjectELK-2.pdf to generate alerts and confirm enrichment fields (otx.*, vt.*) appear in Kibana.
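Before involving the full pipeline, you can sanity-check the fields Logstash and the dashboards key on with jq against a sample record — the record below is a made-up example following the Suricata EVE alert layout, not captured data:

```shell
# Offline check of the EVE fields the pipeline extracts.
sample='{"event_type":"alert","src_ip":"10.0.0.5","dest_ip":"10.0.0.9","alert":{"signature":"TEST rule","severity":2}}'
echo "$sample" | jq -r '[.event_type, .src_ip, .alert.signature] | join(",")'
# prints: alert,10.0.0.5,TEST rule
```

The same jq expression works against live data: tail eve.json and pipe each line through it to confirm the fields are present before debugging Logstash.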


Troubleshooting & Tips

  • If Kibana shows no data: check index pattern, time range, and Logstash logs.
  • Filebeat to Logstash TLS issues: make sure the certificates are valid and the CA is listed under ssl.certificate_authorities in filebeat.yml.
  • API rate limits: use throttle filter in Logstash and caching (Redis or local file cache) if needed.
  • Performance: use ILM, index rollover, and appropriate JVM heap sizes.
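For the ILM point, a minimal policy sketch (Kibana Dev Tools syntax; the name, sizes, and retention period are illustrative, not this repo's settings) that rolls the Suricata index over and deletes old data:

```
PUT _ilm/policy/suricata-ilm
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_size": "25gb", "max_age": "1d" } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}
```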

Roadmap / Future Enhancements

  • Elastic ML jobs for anomaly detection
  • Alerting integration (Slack, Email via Watcher)
  • SOAR automation (TheHive, Shuffle)
  • Multi-node Elasticsearch cluster

License & Contact

License: MIT

Maintainer: Alen Shibu — alenshibu102@gmail.com

End of README
