Merged
2 changes: 1 addition & 1 deletion .claude/mcp-start.sh
@@ -15,7 +15,7 @@ if [ ! -f ".venv/bin/blhackbox" ]; then
.venv/bin/pip install -e . --quiet >&2
fi

# Load .env if present (for NEO4J_*, OLLAMA_*, etc.)
# Load .env if present (for NEO4J_*, etc.)
# API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY) are intentionally commented
# out in .env.example — Claude Code provides its own authentication.
if [ -f ".env" ]; then
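The hunk above shows only part of the `.env` loading step in `mcp-start.sh`. As a sketch of the usual pattern (the `load_env` helper name and the demo setup are illustrative, not taken from the actual script):

```shell
# Sketch of a .env loader as a script like mcp-start.sh might implement it.
# `set -a` marks every variable assigned while sourcing for export, so the
# NEO4J_* settings become visible to child processes; `set +a` restores
# normal behavior afterwards.
load_env() {
    if [ -f ".env" ]; then
        set -a
        . ./.env
        set +a
    fi
}

# Demonstration with a throwaway .env in a temp directory
cd "$(mktemp -d)"
printf 'NEO4J_URI=bolt://neo4j:7687\n' > .env
load_env
echo "NEO4J_URI=$NEO4J_URI"
```

Note that plain sourcing means the `.env` file is executed as shell code, which is why the comment in the hunk matters: anything uncommented there (like the intentionally commented-out API keys) takes effect in the server's environment.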
13 changes: 0 additions & 13 deletions .env.example
@@ -53,19 +53,6 @@ NEO4J_PASSWORD=changeme-min-8-chars
# Neo4j Aura alternative (cloud):
# NEO4J_URI=neo4j+s://xxxxxxxx.databases.neo4j.io

# ── Ollama (optional — legacy local pipeline) ──────────────────────
# The MCP host (Claude) now handles data aggregation directly.
# These settings are only needed if you enable the Ollama pipeline:
# docker compose --profile ollama up -d
#
# OLLAMA_MODEL=llama3.1:8b
# OLLAMA_TIMEOUT=300
# OLLAMA_NUM_CTX=8192
# OLLAMA_KEEP_ALIVE=10m
# OLLAMA_RETRIES=2
# AGENT_TIMEOUT=1200
# AGENT_RETRIES=2

# ── OpenAI (optional — for ChatGPT MCP clients on host) ────────────
# Required for ChatGPT / OpenAI MCP clients (host-based only).
# Get your key at platform.openai.com
16 changes: 0 additions & 16 deletions .github/workflows/build-and-push.yml
@@ -48,22 +48,6 @@ jobs:
dockerfile: docker/screenshot-mcp.Dockerfile
tag_prefix: "screenshot-mcp-"
scout: false
- service: ollama-mcp
dockerfile: docker/ollama-mcp.Dockerfile
tag_prefix: "ollama-mcp-"
scout: true
- service: agent-ingestion
dockerfile: docker/agent-ingestion.Dockerfile
tag_prefix: "agent-ingestion-"
scout: false
- service: agent-processing
dockerfile: docker/agent-processing.Dockerfile
tag_prefix: "agent-processing-"
scout: false
- service: agent-synthesis
dockerfile: docker/agent-synthesis.Dockerfile
tag_prefix: "agent-synthesis-"
scout: false
- service: claude-code
dockerfile: docker/claude-code.Dockerfile
tag_prefix: "claude-code-"
27 changes: 5 additions & 22 deletions CLAUDE.md
@@ -16,20 +16,18 @@ Read the following before writing a single line:
- `CLAUDE.md` (this file), `README.md`
- `docker-compose.yml`, `Makefile`, `.env.example`
- `blhackbox/mcp/server.py` — blhackbox stdio MCP server (Claude Code Web path)
- `mcp_servers/ollama_mcp_server.py` — Ollama MCP orchestrator (optional, `--profile ollama`)
- Every file directly relevant to the task: the relevant `Dockerfile`, `*_server.py`, `*_agent.py`, agent prompts in `blhackbox/prompts/agents/` — whatever applies
- Every file directly relevant to the task: the relevant `Dockerfile`, `*_server.py` — whatever applies
- Do not rely on memory from previous sessions. Read the actual current files.

**Phase 3: Understand Before Acting**
Before writing code, answer these internally:
1. What is the root cause — not the symptom, the actual root cause?
2. Does the fix conflict with anything else in the codebase?
3. Does it break the `AggregatedPayload` schema contract? (Must stay stable for `aggregate_results`, report generation, and the optional Ollama pipeline)
3. Does it break the `AggregatedPayload` schema contract? (Must stay stable for `aggregate_results` and report generation)
4. Does it violate the `shell=False` rule?
5. Am I touching agent prompts in `blhackbox/prompts/agents/`? If so — do I need a rebuild, or can I use a volume mount override?
6. Is there a simpler fix that achieves the same result?
5. Is there a simpler fix that achieves the same result?

Only after answering all six — write the fix.
Only after answering all five — write the fix.

---

Expand All @@ -39,8 +37,7 @@ Claude Desktop, or ChatGPT) IS the orchestrator — it decides which tools to ca
collects raw outputs, and structures them directly into an `AggregatedPayload` via
the `aggregate_results` MCP tool before writing the final pentest report.

The Ollama preprocessing pipeline (3 agents) is now optional (`--profile ollama`)
for local-only / offline processing. By default, the MCP host handles aggregation.
The MCP host handles all data aggregation directly.
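Question 3 in the checklist above hinges on the `AggregatedPayload` schema contract. The real model is a Pydantic model inside the blhackbox package; the sketch below uses only stdlib dataclasses and invented field names to illustrate the idea being protected, a frozen schema that both `aggregate_results` and report generation parse:

```python
from dataclasses import dataclass, field

# Illustrative stand-in for the AggregatedPayload contract. The field
# names here are assumptions for the sketch, not the real schema; the
# actual model is a Pydantic model in the blhackbox package. frozen=True
# mirrors the "must stay stable" requirement: consumers can rely on the
# shape, and accidental mutation raises an error.

@dataclass(frozen=True)
class Finding:
    host: str
    port: int
    severity: str
    summary: str

@dataclass(frozen=True)
class AggregatedPayloadSketch:
    target: str
    findings: list[Finding] = field(default_factory=list)

payload = AggregatedPayloadSketch(
    target="10.0.0.5",
    findings=[Finding("10.0.0.5", 445, "high", "SMB signing disabled")],
)
print(payload.findings[0].severity)  # → high
```

A schema-breaking change would be anything that renames or retypes a field that `aggregate_results` or the report generator reads, which is why `make test` must pass after touching it.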

## Code Standards
- All Python code must be type-annotated
@@ -60,27 +57,13 @@ for local-only / offline processing. By default, the MCP host handles aggregation.
7. Document tools in README.md components table
8. Add unit tests

## Adding or Tuning an Agent Prompt (Optional Ollama Pipeline)
Agent prompts are in `blhackbox/prompts/agents/` (only relevant if using `--profile ollama`):
- `ingestionagent.md` — Ingestion Agent system prompt
- `processingagent.md` — Processing Agent system prompt
- `synthesisagent.md` — Synthesis Agent system prompt

**To tune without rebuilding:** Mount the file as a volume in `docker-compose.yml`.
**To make it permanent:** Edit the `.md` file and rebuild the relevant image.

Always validate that the `AggregatedPayload` Pydantic model still parses correctly
after prompt changes (`make test`).

## Key Reference Links
| Resource | URL |
|----------|-----|
| FastMCP (Python MCP framework) | https://pypi.org/project/fastmcp |
| MCP Protocol spec | https://modelcontextprotocol.io |
| MCP Gateway | https://hub.docker.com/r/docker/mcp-gateway |
| Ollama Python SDK | https://github.com/ollama/ollama-python |
| Portainer CE | https://docs.portainer.io |
| NVIDIA Container Toolkit | https://docs.nvidia.com/datacenter/cloud-native/container-toolkit |
| Docker Hub (blhackbox) | https://hub.docker.com/r/crhacky/blhackbox |

## Verification Document — Authorization for Pentesting
79 changes: 6 additions & 73 deletions DOCKER.md
@@ -19,22 +19,15 @@ All custom images are published to a single Docker Hub repository, differentiated

## Images and Tags

Eight custom images are published to `crhacky/blhackbox` on Docker Hub:
Four custom images are published to `crhacky/blhackbox` on Docker Hub:

| Service | Tag | Dockerfile | Base |
|---|---|---|---|
| **Kali MCP** | `crhacky/blhackbox:kali-mcp` | `docker/kali-mcp.Dockerfile` | `kalilinux/kali-rolling` |
| **WireMCP** | `crhacky/blhackbox:wire-mcp` | `docker/wire-mcp.Dockerfile` | `debian:bookworm-slim` |
| **Screenshot MCP** | `crhacky/blhackbox:screenshot-mcp` | `docker/screenshot-mcp.Dockerfile` | `python:3.13-slim` |
| **Ollama MCP** | `crhacky/blhackbox:ollama-mcp` | `docker/ollama-mcp.Dockerfile` | `python:3.13-slim` |
| **Agent: Ingestion** | `crhacky/blhackbox:agent-ingestion` | `docker/agent-ingestion.Dockerfile` | `python:3.13-slim` |
| **Agent: Processing** | `crhacky/blhackbox:agent-processing` | `docker/agent-processing.Dockerfile` | `python:3.13-slim` |
| **Agent: Synthesis** | `crhacky/blhackbox:agent-synthesis` | `docker/agent-synthesis.Dockerfile` | `python:3.13-slim` |
| **Claude Code** | `crhacky/blhackbox:claude-code` | `docker/claude-code.Dockerfile` | `node:22-slim` |

Custom-built locally (no pre-built image on Docker Hub):
- `crhacky/blhackbox:ollama` — wraps `ollama/ollama:latest` with auto-pull entrypoint (`docker/ollama.Dockerfile`)

Official images pulled directly (no custom build):
- `portainer/portainer-ce:latest` — Docker management UI
- `docker/mcp-gateway:latest` — MCP Gateway (optional, `--profile gateway`)
@@ -63,15 +56,6 @@ Claude Code ──┬──> Kali MCP (SSE, port 9001)
│ After collecting raw outputs, Claude structures them directly:
│ get_payload_schema() → parse/dedup/correlate → aggregate_results()
└──> (optional) Ollama MCP (SSE, port 9000)
├──► agent-ingestion:8001
├──► agent-processing:8002
└──► agent-synthesis:8003
Ollama (LLM backend)

output/ Host-mounted directory for reports, screenshots, sessions
Portainer Docker UI (https://localhost:9443)
@@ -86,10 +70,6 @@ Claude Desktop ──> MCP Gateway (localhost:8080/mcp) ──┬──> Kali MCP
└──> Screenshot MCP
```

> **Ollama is optional since v2.1.** The MCP host (Claude) now handles data
> aggregation directly. The Ollama pipeline is kept as an optional fallback
> for local-only / offline processing. Enable with `--profile ollama`.

---

## Usage
@@ -152,11 +132,6 @@ make health # MCP server health check
| `claude-code` | `crhacky/blhackbox:claude-code` | - | `claude-code` | Claude Code CLI client (Docker) |
| `mcp-gateway` | `docker/mcp-gateway:latest` | `8080` | `gateway` | Single MCP entry point (host clients) |
| `neo4j` | `neo4j:5` | `7474` `7687` | `neo4j` | Cross-session knowledge graph |
| `ollama-mcp` | `crhacky/blhackbox:ollama-mcp` | `9000` | `ollama` | Thin MCP orchestrator (optional) |
| `agent-ingestion` | `crhacky/blhackbox:agent-ingestion` | `8001` | `ollama` | Agent 1: parse raw output (optional) |
| `agent-processing` | `crhacky/blhackbox:agent-processing` | `8002` | `ollama` | Agent 2: deduplicate, compress (optional) |
| `agent-synthesis` | `crhacky/blhackbox:agent-synthesis` | `8003` | `ollama` | Agent 3: assemble payload (optional) |
| `ollama` | `crhacky/blhackbox:ollama` (built locally) | `11434` | `ollama` | LLM inference backend (optional) |

---

@@ -171,8 +146,7 @@ The Claude Code container's `.mcp.json` connects directly to each server:
"mcpServers": {
"kali": { "type": "sse", "url": "http://kali-mcp:9001/sse" },
"wireshark": { "type": "sse", "url": "http://kali-mcp:9003/sse" },
"screenshot": { "type": "sse", "url": "http://screenshot-mcp:9004/sse" },
"ollama-pipeline": { "type": "sse", "url": "http://ollama-mcp:9000/sse" }
"screenshot": { "type": "sse", "url": "http://screenshot-mcp:9004/sse" }
}
}
```
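One way to sanity-check the trimmed config is to parse it and confirm the `ollama-pipeline` entry is gone. This sketch embeds the JSON shown above:

```python
import json

# The post-trim .mcp.json content shown above (the ollama-pipeline
# server entry was removed by this PR).
MCP_JSON = """
{
  "mcpServers": {
    "kali": { "type": "sse", "url": "http://kali-mcp:9001/sse" },
    "wireshark": { "type": "sse", "url": "http://kali-mcp:9003/sse" },
    "screenshot": { "type": "sse", "url": "http://screenshot-mcp:9004/sse" }
  }
}
"""

config = json.loads(MCP_JSON)
servers = sorted(config["mcpServers"])
assert "ollama-pipeline" not in servers  # removed by this PR
print(servers)  # → ['kali', 'screenshot', 'wireshark']
```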
@@ -201,7 +175,6 @@ Requires `--profile gateway` (`make up-gateway`).
| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | - | Required for Claude Code in Docker |
| `OLLAMA_MODEL` | `llama3.1:8b` | Ollama model for preprocessing agents |
| `MCP_GATEWAY_PORT` | `8080` | MCP Gateway host port (optional) |
| `MSF_TIMEOUT` | `300` | Metasploit command timeout in seconds |
| `NEO4J_URI` | `bolt://neo4j:7687` | Neo4j connection URI (optional) |
@@ -240,24 +213,6 @@ Requires `--profile gateway` (`make up-gateway`).
- **Entrypoint**: Screenshot MCP server (FastMCP + Playwright headless Chromium)
- **Transport**: SSE on port 9004

### Ollama MCP (`crhacky/blhackbox:ollama-mcp`)

- **Base**: `python:3.13-slim`
- **Entrypoint**: `ollama_mcp_server.py`
- **Transport**: SSE on port 9000
- **Role**: Thin MCP orchestrator (built with FastMCP) — calls 3 agent containers via HTTP, does NOT call Ollama directly
- **NOT an official Ollama product**

### Agent Containers (`agent-ingestion`, `agent-processing`, `agent-synthesis`)

- **Base**: `python:3.13-slim`
- **Entrypoint**: FastAPI server (`uvicorn`)
- **Ports**: 8001, 8002, 8003 respectively (internal only)
- **Depends on**: Ollama container (each calls Ollama via the official `ollama` Python package)
- **Health endpoint**: `GET /health` — returns immediately without calling Ollama
- Prompts baked in from `blhackbox/prompts/agents/` at build time
- Can be overridden via volume mount for tuning without rebuilding

### Claude Code (`crhacky/blhackbox:claude-code`)

- **Base**: `node:22-slim`
@@ -286,7 +241,6 @@ Named volumes for persistent data:

| Volume | Service | Purpose |
|---|---|---|
| `ollama_models` | ollama | Ollama model storage (optional) |
| `neo4j_data` | neo4j | Neo4j graph database (optional) |
| `neo4j_logs` | neo4j | Neo4j logs (optional) |
| `portainer_data` | portainer | Portainer configuration |
@@ -304,20 +258,18 @@ Host bind mounts for output (accessible on your local filesystem):

## CI/CD Pipeline

Eight custom images are built and pushed to Docker Hub via GitHub Actions:
Four custom images are built and pushed to Docker Hub via GitHub Actions:

```
PR opened ───> CI (lint + test + pip-audit)
PR merged ───> CI ───> Build & Push (8 images) ───> Docker Hub
PR merged ───> CI ───> Build & Push (4 images) ───> Docker Hub
(on CI success)
Tag v* ──────────────> Build & Push (8 images) ───> Docker Hub
Tag v* ──────────────> Build & Push (4 images) ───> Docker Hub

Manual ──────────────> Build & Push (8 images) ───> Docker Hub
Manual ──────────────> Build & Push (4 images) ───> Docker Hub
```

Docker Scout vulnerability scanning runs on the ollama-mcp image.

---

## Useful Commands
Expand All @@ -338,15 +290,9 @@ make up-gateway
# Start with Neo4j (5 containers)
docker compose --profile neo4j up -d

# Start with Ollama pipeline (9 containers, optional)
docker compose --profile ollama up -d

# Launch Claude Code in Docker
make claude-code

# Pull the Ollama model (only if using --profile ollama)
make ollama-pull

# Check health of all MCP servers
make health

@@ -366,24 +312,11 @@ make clean # also removes volumes

---

## GPU Support

GPU acceleration is **disabled by default** for broad compatibility. Ollama runs
on CPU out of the box.

If you have an NVIDIA GPU, uncomment the `deploy` block under the `ollama`
service in `docker-compose.yml` and install the
[NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
on the host. GPU acceleration significantly speeds up Ollama inference.

---

## Security

- **Docker socket**: MCP Gateway (optional) and Portainer mount `/var/run/docker.sock`. This grants effective root on the host. Never expose ports 8080 or 9443 to the public internet.
- **Authorization**: Ensure you have written permission before scanning any target.
- **Neo4j**: Set a strong password in `.env`. Never use defaults in production.
- **Agent containers**: Communicate only on the internal `blhackbox_net` Docker network. No ports exposed to host.
- **Portainer**: Uses HTTPS with a self-signed certificate. Create a strong admin password on first run.

**This tool is for authorized security testing only.** Unauthorized access to computer systems is illegal.