# ELK-Suricata-IDS

Full project documentation and step-by-step setup for an ELK-based IDS (Suricata + Filebeat + Logstash + Elasticsearch + Kibana) with AlienVault OTX and VirusTotal enrichment.

## Overview
This repository contains the documentation and configuration examples required to deploy a single-node ELK-based IDS pipeline that:

- Captures network traffic using Suricata (running in a VM)
- Forwards structured alerts (`eve.json`) via Filebeat to Logstash
- Parses, normalizes, and enriches events in Logstash with AlienVault OTX and VirusTotal
- Stores events in Elasticsearch and visualizes them in Kibana (dashboards included)

This repo is intended as an educational/lab deployment for security students and small teams.
## Architecture

(Place an architecture image in `/docs/arch.png`.)

- Suricata (VM) -> Filebeat (VM) --beats--> Logstash (Host) -> Elasticsearch -> Kibana
- External enrichment: AlienVault OTX & VirusTotal (HTTP API)
## Repository structure

```text
ELK-Suricata-IDS/
├─ ProjectELK-2.pdf    # Full project report (uploaded)
├─ README.md           # This file
├─ configs/
│  ├─ filebeat/filebeat.yml
│  ├─ logstash/pipeline/suricata-pipeline.conf
│  ├─ elasticsearch/elasticsearch.yml
│  └─ kibana/kibana.yml
├─ dashboards/         # Kibana ndjson exports or JSON visualizations
│  └─ suricata-dashboard.ndjson
├─ docs/
│  └─ arch.png
└─ LICENSE
```
## Prerequisites

- Two Linux machines (can be VMs):
  - Host: ELK Stack (Ubuntu 22.04 preferred)
  - VM: Suricata + Filebeat
- Sudo access on both machines
- Internet access for package downloads and API calls (OTX, VT)
- API keys: AlienVault OTX API key and VirusTotal API key
- Git & GitHub account for uploading
## Setup

Below are ordered, actionable steps. Run each command on the specified system.
### Host: ELK stack

- Update the OS and install dependencies:

  ```bash
  sudo apt update && sudo apt upgrade -y
  sudo apt install apt-transport-https ca-certificates curl gnupg -y
  ```
- Install Elasticsearch, Logstash, and Kibana (deb packages). Follow Elastic's repo instructions for the version you want, and keep every component (including Beats) on the same major version. Simplified example:

  ```bash
  wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-<version>-amd64.deb
  sudo dpkg -i elasticsearch-<version>-amd64.deb
  sudo systemctl enable --now elasticsearch
  ```

  Repeat for the Logstash and Kibana packages.
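  Alternatively, a minimal sketch of the repo-based install (the 8.x apt repository is shown as an example; substitute the major version you chose):

  ```bash
  # Add Elastic's signing key and apt repository
  wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
  echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

  # Install all three components from the repo
  sudo apt update
  sudo apt install elasticsearch logstash kibana -y
  sudo systemctl enable --now elasticsearch kibana
  ```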
- Confirm Elasticsearch is up:

  ```bash
  curl -k https://localhost:9200
  ```
- Configure Elasticsearch in `/etc/elasticsearch/elasticsearch.yml` (e.g. `network.host: 0.0.0.0` so the VM can reach it). Set the JVM heap in `/etc/elasticsearch/jvm.options` if needed, then restart the service:

  ```bash
  sudo systemctl restart elasticsearch
  ```
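  A minimal lab-oriented sketch of those settings (single-node lab assumptions; harden security before any real deployment):

  ```yaml
  # /etc/elasticsearch/elasticsearch.yml (excerpt)
  network.host: 0.0.0.0        # listen on all interfaces so the Suricata VM can reach us
  discovery.type: single-node  # one-node lab cluster, no discovery needed
  ```

  For the heap, a fixed size such as `-Xms2g` / `-Xmx2g` in `jvm.options` (or a file under `jvm.options.d/`) is a reasonable starting point for a lab VM.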
- Configure Kibana in `/etc/kibana/kibana.yml`: set `server.host`, `elasticsearch.hosts`, and SSL options if using HTTPS.
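  A minimal sketch (the host/port values are this lab's assumptions):

  ```yaml
  # /etc/kibana/kibana.yml (excerpt)
  server.host: "0.0.0.0"       # make the UI reachable from your workstation
  server.port: 5601
  elasticsearch.hosts: ["https://localhost:9200"]
  # With security enabled, also set credentials (service account token or
  # username/password) and the CA certificate options.
  ```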
- Install Logstash and prepare the pipeline path: `/etc/logstash/conf.d/` or `/etc/logstash/pipeline/`, depending on the package.
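  With the deb package, pipelines are registered in `/etc/logstash/pipelines.yml`; a sketch (adjust `path.config` to wherever you place the file):

  ```yaml
  # /etc/logstash/pipelines.yml
  - pipeline.id: suricata
    path.config: "/etc/logstash/conf.d/suricata-pipeline.conf"
  ```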
### VM: Suricata + Filebeat

- Update the OS:

  ```bash
  sudo apt update && sudo apt upgrade -y
  ```
- Install Suricata:

  ```bash
  sudo add-apt-repository ppa:oisf/suricata-stable -y
  sudo apt update
  sudo apt install suricata -y
  ```
- Configure Suricata to log `eve.json` (default: `/var/log/suricata/eve.json`).
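  EVE logging is enabled by default in the packaged config; a sketch of the relevant `suricata.yaml` pieces (the interface name is an assumption, adjust to your VM):

  ```yaml
  # /etc/suricata/suricata.yaml (excerpt)
  af-packet:
    - interface: eth0        # capture interface for this VM

  outputs:
    - eve-log:
        enabled: yes
        filetype: regular
        filename: eve.json   # written under the default log directory
        types:
          - alert
          - dns
          - http
          - tls
  ```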
- Install Filebeat on the VM (or on the host, if preferred), matching the major version of the stack:

  ```bash
  wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-<version>-amd64.deb
  sudo dpkg -i filebeat-<version>-amd64.deb
  ```
- Enable the Suricata module and point the output at Logstash (host IP):

  ```bash
  sudo filebeat modules enable suricata
  sudo nano /etc/filebeat/filebeat.yml
  # configure:
  #   output.logstash:
  #     hosts: ["<LOGSTASH_HOST_IP>:5044"]
  ```
- Start the services:

  ```bash
  sudo systemctl enable --now suricata
  sudo systemctl enable --now filebeat
  ```
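  Optional sanity checks before moving on (both tools ship built-in config testers):

  ```bash
  sudo suricata -T -c /etc/suricata/suricata.yaml   # validate the Suricata config
  sudo filebeat test config                         # validate filebeat.yml
  sudo filebeat test output                         # confirm Filebeat can reach Logstash
  ```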
### Logstash pipeline

- Create a Logstash pipeline file at `/etc/logstash/pipeline/suricata-pipeline.conf` (example provided in `configs/`).
- Pipeline responsibilities:
  - Accept Beats input (port 5044)
  - Parse `eve.json` fields
  - Enrich with OTX and VirusTotal via the `http` filter, using `throttle`/caching to stay under rate limits
  - Output to the Elasticsearch index `suricata-*`
- Restart Logstash:

  ```bash
  sudo systemctl restart logstash
  ```
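  Optionally validate the pipeline syntax before (re)starting (path assumes the deb install location):

  ```bash
  sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
  ```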
### Kibana

- Open Kibana in a browser: `http://<HOST_IP>:5601` (or the configured port).
- Create the index pattern `suricata-*` and set `@timestamp` as the time field.
- Import the dashboard NDJSON from `/dashboards/suricata-dashboard.ndjson` via Stack Management → Saved Objects → Import.
- If you prefer Lens, follow the dashboard steps in the project PDF to recreate the visualizations.
## Configuration examples

Full sample configs live in `configs/` (placeholders). Below are short excerpts; the complete examples are in this repo.

### filebeat.yml (excerpt)

```yaml
# Use either the raw log input below or the Suricata module, not both,
# otherwise every event is shipped twice.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/suricata/eve.json

filebeat.modules:
  - module: suricata

output.logstash:
  hosts: ["<LOGSTASH_HOST_IP>:5044"]
```
### Logstash pipeline (concept)

```conf
input { beats { port => 5044 } }

filter {
  # Filebeat may ship eve.json lines as raw text in "message"; parse them.
  json { source => "message" }
  mutate { add_field => { "[@metadata][target_index]" => "suricata-%{+YYYY.MM.dd}" } }
  # http enrichment to OTX/VT goes here (with throttle/cache); see the
  # enrichment section below.
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][target_index]}"
  }
}
```
## OTX & VirusTotal enrichment

- Get API keys:
  - AlienVault OTX: create an account → get the API key
  - VirusTotal: create an account → obtain the API key
- In Logstash, use the `http` filter (or a `ruby` script) to call the APIs. Example flow (a sketch follows this list):
  - For each event, extract `src_ip`, `dest_ip`, and `file.hash`
  - Query OTX at `https://otx.alienvault.com/api/v1/indicators/IPv4/{ip}/reputation` (add the `X-OTX-API-KEY` header)
  - Query VirusTotal at `/api/v3/files/{hash}` or `/api/v3/ip_addresses/{ip}` with the `x-apikey` header
  - Cache responses and throttle to avoid rate limits
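A minimal sketch of the OTX leg of that flow, assuming the `http` and `throttle` filter plugins are available and `OTX_API_KEY` is exported in Logstash's environment (see the note below); field names follow Suricata's `eve.json` keys:

```conf
filter {
  # Tag repeat offenders so we don't hammer the API: after 10 events for
  # the same source IP within 60 s, skip further lookups for that IP.
  throttle {
    key         => "%{src_ip}"
    after_count => 10
    period      => "60"
    add_tag     => ["enrich_throttled"]
  }

  if [src_ip] and "enrich_throttled" not in [tags] {
    http {
      url         => "https://otx.alienvault.com/api/v1/indicators/IPv4/%{src_ip}/reputation"
      verb        => "GET"
      headers     => { "X-OTX-API-KEY" => "${OTX_API_KEY}" }
      target_body => "[otx][reputation]"   # enriched data lands under otx.*
    }
  }
}
```

The VirusTotal call is analogous (`https://www.virustotal.com/api/v3/ip_addresses/%{src_ip}` with an `x-apikey` header, targeting `vt.*`); a real deployment would also cache responses (e.g. in Redis) rather than re-querying the API for every event.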
**Important:** Never push API keys to a public repo. Use environment variables or secrets management. In Logstash, reference keys as environment variables, e.g. `headers => { "X-OTX-API-KEY" => "${OTX_API_KEY}" }`, and export `OTX_API_KEY` in systemd or the Logstash environment file.
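One way to export the keys for a systemd-managed Logstash (a sketch; the variable names match the pipeline example above):

```bash
sudo systemctl edit logstash
# In the editor that opens, add:
#   [Service]
#   Environment="OTX_API_KEY=<your-otx-key>"
#   Environment="VT_API_KEY=<your-vt-key>"
sudo systemctl daemon-reload
sudo systemctl restart logstash
```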
## Verification & testing

- Verify Suricata is producing `eve.json`:

  ```bash
  tail -f /var/log/suricata/eve.json
  ```
- Verify Filebeat is shipping to Logstash (on the VM):

  ```bash
  sudo journalctl -u filebeat -f
  ```
- Verify Logstash received events and Elasticsearch indexed them:

  ```bash
  sudo tail -f /var/log/logstash/logstash-plain.log
  curl -s "http://localhost:9200/suricata-*/_search?size=1" | jq .
  ```
- Use the test cases (curl, wget, hydra) listed in `ProjectELK-2.pdf` to generate alerts and confirm the enrichment fields (`otx.*`, `vt.*`) appear in Kibana.
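  A quick end-to-end smoke test that fires a standard ET rule (assumes the default ruleset is loaded on the VM):

  ```bash
  # Fetches a page whose body matches the classic "id check returned root"
  # signature (sid 2100498), generating a benign test alert.
  curl -s http://testmynids.org/uid/index.html

  # The alert should appear in eve.json within a few seconds...
  grep 2100498 /var/log/suricata/eve.json | tail -n 1
  # ...and shortly after in the suricata-* index on the host.
  ```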
## Troubleshooting

- If Kibana shows no data: check the index pattern, time range, and Logstash logs.
- Filebeat-to-Logstash TLS issues: ensure the correct certs and the `filebeat.yml` truststore are configured.
- API rate limits: use the `throttle` filter in Logstash and caching (Redis or a local file cache) if needed.
- Performance: use ILM, index rollover, and appropriate JVM heap sizes (a sample ILM policy follows this list).
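A sketch of a matching ILM policy (Kibana Dev Tools syntax; the size/age thresholds are arbitrary lab values). Note that the rollover action requires a data stream or a write alias; with the plain daily `suricata-%{+YYYY.MM.dd}` indices used above, the delete phase is the immediately useful part:

```
PUT _ilm/policy/suricata-lab
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_size": "25gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```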
## Future improvements

- Elastic ML jobs for anomaly detection
- Alerting integration (Slack, email via Watcher)
- SOAR automation (TheHive, Shuffle)
- Multi-node Elasticsearch cluster
## License & maintainer

License: MIT

Maintainer: Alen Shibu — alenshibu102@gmail.com