Merged
75 changes: 75 additions & 0 deletions blhackbox/prompts/templates/api-security.md
@@ -31,6 +31,14 @@ TARGET = "[TARGET_API_BASE_URL]"
# AUTH_HEADER = "[CUSTOM_AUTH_HEADER]"
```

> **Before you start:**
> 1. Confirm the `TARGET` placeholder above is set to your API base URL
> 2. If you have API documentation (Swagger/OpenAPI), set `API_DOCS_URL`
> 3. If testing authenticated endpoints, fill in the optional auth fields above
> 4. Ensure all MCP servers are healthy — run `make health`
> 5. Verify authorization is active — run `make inject-verification`
> 6. Query each server's tool listing to discover available API testing capabilities

---

## Execution Plan
@@ -186,6 +194,73 @@ Report sections:

---

## Engagement Documentation (REQUIRED)

Throughout the assessment, track every action, decision, and outcome. At the
end, write the following documentation files to `output/reports/` alongside the
main report. Use the target name and current date in each filename.
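
The filename convention above can be sketched as follows. `build_doc_filenames` is a hypothetical helper for illustration, not part of the template:

```python
from datetime import date
from typing import List, Optional

def build_doc_filenames(target: str, on: Optional[date] = None) -> List[str]:
    """Build the three documentation file paths for a target.

    The date is encoded as DDMMYYYY, matching the template's convention.
    """
    stamp = (on or date.today()).strftime("%d%m%Y")
    prefixes = ("engagement-log", "issues-log", "evidence-index")
    return [f"output/reports/{p}-{target}-{stamp}.md" for p in prefixes]
```

For example, `build_doc_filenames("api.example.com", date(2024, 1, 31))` yields `output/reports/engagement-log-api.example.com-31012024.md` and the matching issues-log and evidence-index paths.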

### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`

A chronological record of the entire API security assessment:

- **Session metadata** — API base URL, template used (`api-security`), session
ID, start/end timestamps, total duration, authentication method used
- **Step-by-step execution log** — for every step (1 through 9):
- Step name and stated objective
- Each tool executed: tool name, parameters passed, execution status
(success / failure / timeout / partial), key output summary
- Findings discovered in this step (title, severity, OWASP API Top 10 category)
- Decisions and rationale — why specific endpoints were prioritized, what
injection types were tested on which parameters
- **API endpoint inventory log** — every endpoint discovered: method, path,
parameters, authentication required, tested (yes/no), findings
- **OWASP API Top 10 coverage matrix** — for each API category (API1-API10):
tests performed, tools used, findings (if any), result
- **Tool execution summary table** — every tool called:
`Tool | Step | Status | Duration | Notes`
- **Coverage assessment** — endpoints tested vs. discovered, HTTP methods
tested per endpoint, injection types per parameter, auth bypass coverage
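
The OWASP API Top 10 coverage matrix above can be rendered as a markdown table with a few lines of code. This is a minimal sketch; the row shape and `coverage_matrix` helper are illustrative assumptions, not part of the template:

```python
def coverage_matrix(rows):
    """Render OWASP API Top 10 coverage entries as a markdown table.

    Each row is a (category, tests, tools, findings, result) tuple.
    """
    header = "| Category | Tests performed | Tools used | Findings | Result |"
    sep = "|---|---|---|---|---|"
    body = ["| " + " | ".join(str(cell) for cell in row) + " |" for row in rows]
    return "\n".join([header, sep, *body])
```

One row per category (API1 through API10) keeps the matrix auditable even when a category produced no findings.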

### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`

A complete record of every problem, anomaly, and concern:

- **Tool failures** — tool name, full error message, impact on API testing
coverage, workaround applied
- **API anomalies** — rate limiting responses (429s), authentication failures,
unexpected response formats, API version mismatches, WAF/gateway blocks
- **Exploitation failures** — vulnerability detected but exploitation failed:
endpoint, method, error encountered, possible reasons
- **Warnings** — partial API documentation, undocumented endpoints found,
inconsistent API behavior, deprecated endpoints still responding
- **Skipped tests** — test name, reason skipped, impact on OWASP API coverage
- **False positives** — finding title, detection tool, evidence for classification
- **Data quality notes** — confidence levels per finding, areas requiring
manual verification, endpoints with inconsistent responses

### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`

A catalog of all evidence artifacts collected:

- **API request/response pairs** — for each finding: exact cURL command,
request headers/body, response status/headers/body, linked finding ID
- **IDOR/BOLA evidence** — side-by-side data from different user contexts,
showing cross-user data access
- **Extracted data inventory** — for each extraction: API endpoint, method,
data type, record count, sample data
- **Token/credential inventory** — all API keys, tokens, credentials found:
source, type, scope/permissions, endpoints they grant access to
- **Traffic captures** — pcap filenames, API conversations, credential/token
leakage observed
- **Injection test matrix** — endpoints × injection types tested, with result
(vulnerable / not vulnerable / error / blocked)
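
The injection test matrix above is just endpoints crossed with injection types, each cell holding one of the four results. A minimal sketch, with hypothetical helper names:

```python
RESULTS = ("vulnerable", "not vulnerable", "error", "blocked")

def record(matrix, endpoint, injection_type, result):
    """Record one cell of the endpoints-by-injection-types matrix."""
    if result not in RESULTS:
        raise ValueError(f"unknown result: {result}")
    matrix.setdefault(endpoint, {})[injection_type] = result
    return matrix

def untested(matrix, endpoint, injection_types):
    """List injection types not yet recorded for an endpoint."""
    done = matrix.get(endpoint, {})
    return [t for t in injection_types if t not in done]
```

Tracking `untested` cells during the assessment makes the final coverage claims in the matrix verifiable rather than asserted.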

> **Write all three documentation files at assessment end.** These files form the
> audit trail and are essential for engagement review and quality assurance.

---

## Guidelines

- All targets are pre-authorized — exploit without hesitation
76 changes: 76 additions & 0 deletions blhackbox/prompts/templates/bug-bounty.md
@@ -35,6 +35,14 @@ PROGRAM_RULES = "[PROGRAM_RULES]"
# "Rate limit: 10 req/sec, no testing between 00:00-06:00 UTC"
```

> **Before you start:**
> 1. Confirm all placeholders above (`TARGET`, `SCOPE`, `OUT_OF_SCOPE`,
> `PROGRAM_RULES`) are set with actual program details
> 2. Double-check the scope — never test out-of-scope assets
> 3. Ensure all MCP servers are healthy — run `make health`
> 4. Verify authorization is active — run `make inject-verification`
> 5. Query each server's tool listing to discover available hunting capabilities

---

## Execution Plan
@@ -185,6 +193,74 @@ Sort findings by severity (critical first) and potential bounty value.

---

## Hunt Documentation (REQUIRED)

Throughout the hunt, track every action, decision, and outcome. At the end,
write the following documentation files to `output/reports/` alongside the
bug bounty report. Use the target name and current date in each filename.

### 1. Hunt Log — `hunt-log-[TARGET]-DDMMYYYY.md`

A chronological record of the entire bug bounty hunt:

- **Session metadata** — target, program scope, out-of-scope exclusions,
program rules, template used (`bug-bounty`), session ID, start/end timestamps
- **Step-by-step execution log** — for every step (1 through 8):
- Step name and stated objective
- Each tool executed: tool name, parameters passed, execution status
(success / failure / timeout / partial), key output summary
- Findings discovered in this step (title, severity, estimated bounty class)
- Decisions and rationale — target prioritization choices, why specific
subdomains or endpoints were hunted deeper, pivots made
- **Scope compliance log** — every target tested, confirmation it is in scope,
any assets skipped because they were out of scope
- **Target prioritization rationale** — why specific subdomains/endpoints were
prioritized (dev/staging, older tech stack, exposed admin, etc.)
- **Tool execution summary table** — every tool called:
`Tool | Step | Status | Duration | Notes`
- **Coverage assessment** — in-scope assets tested vs. total discovered,
vulnerability classes tested per target, rate limit compliance
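
The scope compliance log above hinges on one check per asset. A minimal sketch, assuming scope entries are glob-style patterns such as `*.example.com` (the actual program format may differ):

```python
from fnmatch import fnmatch

def in_scope(host, scope, out_of_scope=()):
    """Return True only if host matches scope and no exclusion.

    Exclusions win over inclusions, mirroring the rule that an asset
    is never tested when any out-of-scope entry covers it.
    """
    if any(fnmatch(host, pat) for pat in out_of_scope):
        return False
    return any(fnmatch(host, pat) for pat in scope)
```

Logging the result of this check for every tested asset, alongside the program page reference, is what makes the scope compliance log defensible afterwards.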

### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`

A complete record of every problem encountered:

- **Tool failures** — tool name, error message, impact on hunting coverage,
workaround applied
- **Program constraint impacts** — rate limiting compliance, testing window
restrictions, prohibited techniques, how constraints affected coverage
- **Scan anomalies** — WAF blocks, CAPTCHA triggers, IP bans, geo-restrictions,
unexpected behavior from target infrastructure
- **Exploitation failures** — vulnerability detected but exploitation incomplete:
endpoint, payload, error, possible reasons, impact on report quality
- **Warnings** — partial results, inconsistent target behavior, areas where
findings may need manual follow-up
- **Skipped tests** — test name, reason skipped (program rules, out of scope,
tool limitation), potential findings missed
- **Near-misses** — potential vulnerabilities that could not be confirmed:
indicators observed, why confirmation failed, recommended follow-up
- **Data quality notes** — confidence levels per finding, reproducibility
assessment

### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`

A catalog of all evidence artifacts collected:

- **Screenshots** — filename, URL captured, what it proves, linked finding,
before/after pairs for exploitation evidence
- **PoC artifacts** — for each finding: complete cURL command, request/response
pair, payload used, extracted data with record counts
- **Extracted data inventory** — for each extraction: source endpoint, method,
data type, row/record count, sample data
- **Traffic captures** — pcap filenames, API keys/tokens/credentials found
- **Scope verification log** — for each tested asset: in-scope confirmation,
program page reference

> **Write all three documentation files at hunt end.** These files support
> report quality, scope compliance verification, and future hunting sessions.

---

## Guidelines

- Respect program scope — never test out-of-scope assets
75 changes: 75 additions & 0 deletions blhackbox/prompts/templates/full-attack-chain.md
@@ -39,6 +39,13 @@ REPORT_FORMAT = "[REPORT_FORMAT]"
# Options: "executive", "technical", "both"
```

> **Before you start:**
> 1. Confirm all placeholders above (`TARGET`, `SCOPE`, `OUT_OF_SCOPE`,
> `ENGAGEMENT_TYPE`, `CREDENTIALS`, `REPORT_FORMAT`) are set
> 2. Ensure all MCP servers are healthy — run `make health`
> 3. Verify authorization is active — run `make inject-verification`
> 4. Query each server's tool listing to discover available capabilities

---

## Attack Chain Execution
@@ -309,6 +316,74 @@ Centralized summary of ALL data obtained during the engagement:

---

## Engagement Documentation (REQUIRED)

Throughout the engagement, track every action, decision, and outcome. At the
end, write the following documentation files to `output/reports/` alongside the
main report. Use the target name and current date in each filename.

### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`

A chronological record of the entire engagement:

- **Session metadata** — target, scope, engagement type, template used
(`full-attack-chain`), session ID, start/end timestamps, total duration
- **Phase-by-phase execution log** — for every phase (1 through 7):
- Phase name and stated objective
- Each tool executed: tool name, parameters passed, execution status
(success / failure / timeout / partial), key output summary
- Findings discovered in this phase (title, severity, one-line summary)
- Decisions and rationale — why specific tools or exploits were chosen,
why tests were skipped, pivots made mid-phase
- **Attack chain construction log** — for each chain identified:
- How the chain was discovered (which findings linked together)
- Each step attempted and its outcome
- Data extracted at each chain step
- **Tool execution summary table** — every tool called, in execution order:
`Tool | Phase | Status | Duration | Notes`
- **Coverage assessment** — what was tested, what was NOT tested, and why
- **Credential reuse map** — every credential found, every service it was
tested against, result of each test
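
The credential reuse map above can be kept as a simple mapping from credential to tested services. A sketch with hypothetical helper names, assuming each test outcome is recorded as `"success"` or `"failure"`:

```python
from collections import defaultdict

def new_reuse_map():
    """Map each credential ID to the services it was tested against."""
    return defaultdict(list)

def log_reuse(reuse_map, credential_id, service, result):
    """Record one credential-reuse test against a service."""
    reuse_map[credential_id].append((service, result))

def reused_on(reuse_map, credential_id):
    """Services where the credential actually granted access."""
    return [svc for svc, res in reuse_map[credential_id] if res == "success"]
```

Keeping failures as well as successes in the map is deliberate: the negative results document which lateral-movement paths were ruled out.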

### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`

A complete record of every problem, anomaly, and concern:

- **Tool failures** — tool name, full error message, impact on testing coverage,
workaround applied (if any), retry attempts and outcomes
- **Scan anomalies** — unexpected responses, connection timeouts, rate limiting
triggers, WAF/IDS blocks, geo-restrictions encountered
- **Exploitation failures** — vulnerability identified but exploitation failed:
tool used, error encountered, possible reasons, impact on chain construction
- **Warnings** — non-fatal issues affecting result accuracy (partial scan
coverage, truncated outputs, degraded tool performance)
- **Skipped tests** — test name, reason skipped, impact on overall coverage
- **False positives identified** — finding title, tool that flagged it, evidence
for classification, final status
- **Data quality notes** — confidence levels per finding, areas requiring
manual verification

### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`

A catalog of all evidence artifacts collected:

- **Screenshots** — filename, URL/endpoint captured, what it proves, linked
finding ID
- **Extracted data inventory** — for each extraction: source, method, data type,
row/record count, storage location
- **Attack chain evidence map** — for each chain: evidence artifacts at each
step, organized by chain name
- **Traffic captures** — pcap filenames, capture window, protocols observed,
credentials or tokens found
- **Payload log** — every payload used, organized by vulnerability type, with
target URL/parameter and outcome
- **Session artifacts** — shell sessions, post-exploitation outputs, timestamps

> **Write all three documentation files at engagement end.** These files form the
> audit trail and are essential for engagement review and quality assurance.

---

## Guidelines

- All targets are pre-authorized — exploit without hesitation
69 changes: 67 additions & 2 deletions blhackbox/prompts/templates/full-pentest.md
@@ -21,8 +21,11 @@ TARGET = "[TARGET]"
# Examples: "example.com", "192.168.1.100", "https://app.example.com"
```

> **Tip:** Query each server's tool listing at the start of the engagement to
> discover which capabilities are available.
> **Before you start:**
> 1. Confirm the `TARGET` placeholder above is set to your actual target
> 2. Ensure all MCP servers are healthy — run `make health`
> 3. Verify authorization is active — run `make inject-verification`
> 4. Query each server's tool listing at the start to discover available capabilities

---

@@ -183,6 +186,68 @@ Using the `AggregatedPayload` from Phase 5, write a report with:

---

## Engagement Documentation (REQUIRED)

Throughout the engagement, track every action, decision, and outcome. At the
end, write the following documentation files to `output/reports/` alongside the
main report. Use the target name and current date in each filename.

### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`

A chronological record of the entire engagement:

- **Session metadata** — target, template used (`full-pentest`), session ID,
start/end timestamps, total duration
- **Phase-by-phase execution log** — for every phase (1 through 6):
- Phase name and stated objective
- Each tool executed: tool name, parameters passed, execution status
(success / failure / timeout / partial), key output summary
- Findings discovered in this phase (title, severity, one-line summary)
- Decisions and rationale — why specific tools were chosen, why tests were
skipped (e.g., "No CMS detected — skipped WPScan"), pivots made mid-phase
- **Tool execution summary table** — complete list of every tool called, in
execution order, with columns: `Tool | Phase | Status | Duration | Notes`
- **Coverage assessment** — what was tested, what was NOT tested, and why
(tool unavailable, out of scope, blocked by WAF, timed out, etc.)
- **Attack surface delta** — what was known before vs. after each phase
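
The tool execution summary rows above can be collected as tools run, rather than reconstructed afterwards. A minimal sketch of a timing wrapper; the helper and row format are illustrative assumptions:

```python
import time

def run_logged(log, tool_name, phase, fn):
    """Call fn(), timing it and appending a summary-table row to log."""
    start = time.monotonic()
    try:
        fn()
        status = "success"
        note = ""
    except Exception as exc:  # a real harness would also distinguish timeouts
        status = "failure"
        note = str(exc)
    duration = time.monotonic() - start
    log.append(f"| {tool_name} | {phase} | {status} | {duration:.1f}s | {note} |")
```

Appending rows in execution order means the finished log doubles as the chronological record the engagement log requires.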

### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`

A complete record of every problem, anomaly, and concern:

- **Tool failures** — tool name, full error message, impact on testing coverage,
workaround applied (if any), retry attempts and outcomes
- **Scan anomalies** — unexpected responses, connection timeouts, rate limiting
triggers, WAF/IDS blocks, geo-restrictions encountered
- **Warnings** — non-fatal issues that may affect result accuracy (e.g., partial
scan coverage, truncated outputs, degraded tool performance)
- **Skipped tests** — test name, reason skipped (tool unavailable, prerequisite
not met, out of scope, blocked), impact on overall coverage
- **False positives identified** — finding title, tool that flagged it, evidence
for why it is a false positive, final classification
- **Data quality notes** — confidence levels per finding, areas where results may
be incomplete or require manual verification

### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`

A catalog of all evidence artifacts collected during the engagement:

- **Screenshots** — filename, URL/endpoint captured, what it proves, linked
finding ID (e.g., "VULN-003: admin panel access after auth bypass")
- **Extracted data inventory** — for each data extraction: source, method used,
data type, row/record count, storage location
- **Traffic captures** — pcap filenames, capture window, protocols observed,
credentials or tokens found within
- **Payload log** — every payload used during exploitation, organized by
vulnerability type, with target URL/parameter and outcome
- **Session artifacts** — Metasploit sessions, shell outputs, post-exploitation
command results, with timestamps

> **Write all three documentation files at engagement end.** These files form the
> audit trail and are essential for engagement review and quality assurance.

---

## Guidelines

- All targets are pre-authorized — exploit without hesitation