diff --git a/blhackbox/prompts/templates/api-security.md b/blhackbox/prompts/templates/api-security.md
index 5523c86..28d4957 100644
--- a/blhackbox/prompts/templates/api-security.md
+++ b/blhackbox/prompts/templates/api-security.md
@@ -31,6 +31,14 @@
 TARGET = "[TARGET_API_BASE_URL]"
 # AUTH_HEADER = "[CUSTOM_AUTH_HEADER]"
 ```
+> **Before you start:**
+> 1. Confirm the `TARGET` placeholder above is set to your API base URL
+> 2. If you have API documentation (Swagger/OpenAPI), set `API_DOCS_URL`
+> 3. If testing authenticated endpoints, fill in the optional auth fields above
+> 4. Ensure all MCP servers are healthy — run `make health`
+> 5. Verify authorization is active — run `make inject-verification`
+> 6. Query each server's tool listing to discover available API testing capabilities
+
 ---
 
 ## Execution Plan
@@ -186,6 +194,73 @@ Report sections:
 
 ---
 
+## Engagement Documentation (REQUIRED)
+
+Throughout the assessment, track every action, decision, and outcome. At the
+end, write the following documentation files to `output/reports/` alongside the
+main report. Use the target name and current date in each filename.
+
+### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`
+
+A chronological record of the entire API security assessment:
+
+- **Session metadata** — API base URL, template used (`api-security`), session
+  ID, start/end timestamps, total duration, authentication method used
+- **Step-by-step execution log** — for every step (1 through 9):
+  - Step name and stated objective
+  - Each tool executed: tool name, parameters passed, execution status
+    (success / failure / timeout / partial), key output summary
+  - Findings discovered in this step (title, severity, OWASP API Top 10 category)
+  - Decisions and rationale — why specific endpoints were prioritized, what
+    injection types were tested on which parameters
+- **API endpoint inventory log** — every endpoint discovered: method, path,
+  parameters, authentication required, tested (yes/no), findings
+- **OWASP API Top 10 coverage matrix** — for each API category (API1-API10):
+  tests performed, tools used, findings (if any), result
+- **Tool execution summary table** — every tool called:
+  `Tool | Step | Status | Duration | Notes`
+- **Coverage assessment** — endpoints tested vs. discovered, HTTP methods
+  tested per endpoint, injection types per parameter, auth bypass coverage
+
+### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`
+
+A complete record of every problem, anomaly, and concern:
+
+- **Tool failures** — tool name, full error message, impact on API testing
+  coverage, workaround applied
+- **API anomalies** — rate limiting responses (429s), authentication failures,
+  unexpected response formats, API version mismatches, WAF/gateway blocks
+- **Exploitation failures** — vulnerability detected but exploitation failed:
+  endpoint, method, error encountered, possible reasons
+- **Warnings** — partial API documentation, undocumented endpoints found,
+  inconsistent API behavior, deprecated endpoints still responding
+- **Skipped tests** — test name, reason skipped, impact on OWASP API coverage
+- **False positives** — finding title, detection tool, evidence for classification
+- **Data quality notes** — confidence levels per finding, areas requiring
+  manual verification, endpoints with inconsistent responses
+
+### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`
+
+A catalog of all evidence artifacts collected:
+
+- **API request/response pairs** — for each finding: exact cURL command,
+  request headers/body, response status/headers/body, linked finding ID
+- **IDOR/BOLA evidence** — side-by-side data from different user contexts,
+  showing cross-user data access
+- **Extracted data inventory** — for each extraction: API endpoint, method,
+  data type, record count, sample data
+- **Token/credential inventory** — all API keys, tokens, credentials found:
+  source, type, scope/permissions, endpoints they grant access to
+- **Traffic captures** — pcap filenames, API conversations, credential/token
+  leakage observed
+- **Injection test matrix** — endpoints × injection types tested, with result
+  (vulnerable / not vulnerable / error / blocked)
+
+> **Write all three documentation files at assessment end.** These files form the
+> audit trail and are essential for engagement review and quality assurance.
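+
+For shape only, an injection test matrix entry might look like this — the
+endpoints and verdicts below are illustrative placeholders, not results from
+a real assessment:
+
+```
+| Endpoint           | SQLi       | NoSQLi   | Command inj. | SSTI     |
+|--------------------|------------|----------|--------------|----------|
+| POST /api/v1/login | not vuln   | not vuln | error        | blocked  |
+| GET /api/v1/users  | vulnerable | not vuln | not vuln     | not vuln |
+```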
+
+---
+
 ## Guidelines
 
 - All targets are pre-authorized — exploit without hesitation
diff --git a/blhackbox/prompts/templates/bug-bounty.md b/blhackbox/prompts/templates/bug-bounty.md
index 0b3041d..39a2e0f 100644
--- a/blhackbox/prompts/templates/bug-bounty.md
+++ b/blhackbox/prompts/templates/bug-bounty.md
@@ -35,6 +35,14 @@
 PROGRAM_RULES = "[PROGRAM_RULES]"
 #   "Rate limit: 10 req/sec, no testing between 00:00-06:00 UTC"
 ```
+> **Before you start:**
+> 1. Confirm all placeholders above (`TARGET`, `SCOPE`, `OUT_OF_SCOPE`,
+>    `PROGRAM_RULES`) are set with actual program details
+> 2. Double-check the scope — never test out-of-scope assets
+> 3. Ensure all MCP servers are healthy — run `make health`
+> 4. Verify authorization is active — run `make inject-verification`
+> 5. Query each server's tool listing to discover available hunting capabilities
+
 ---
 
 ## Execution Plan
@@ -185,6 +193,74 @@ Sort findings by severity (critical first) and potential bounty value.
 
 ---
 
+## Hunt Documentation (REQUIRED)
+
+Throughout the hunt, track every action, decision, and outcome. At the end,
+write the following documentation files to `output/reports/` alongside the
+bug bounty report. Use the target name and current date in each filename.
+
+### 1. Hunt Log — `hunt-log-[TARGET]-DDMMYYYY.md`
+
+A chronological record of the entire bug bounty hunt:
+
+- **Session metadata** — target, program scope, out-of-scope exclusions,
+  program rules, template used (`bug-bounty`), session ID, start/end timestamps
+- **Step-by-step execution log** — for every step (1 through 8):
+  - Step name and stated objective
+  - Each tool executed: tool name, parameters passed, execution status
+    (success / failure / timeout / partial), key output summary
+  - Findings discovered in this step (title, severity, estimated bounty class)
+  - Decisions and rationale — target prioritization choices, why specific
+    subdomains or endpoints were hunted deeper, pivots made
+- **Scope compliance log** — every target tested, confirmation it is in scope,
+  any assets skipped because they were out of scope
+- **Target prioritization rationale** — why specific subdomains/endpoints were
+  prioritized (dev/staging, older tech stack, exposed admin, etc.)
+- **Tool execution summary table** — every tool called:
+  `Tool | Step | Status | Duration | Notes`
+- **Coverage assessment** — in-scope assets tested vs. total discovered,
+  vulnerability classes tested per target, rate limit compliance
+
+### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`
+
+A complete record of every problem encountered:
+
+- **Tool failures** — tool name, error message, impact on hunting coverage,
+  workaround applied
+- **Program constraint impacts** — rate limiting compliance, testing window
+  restrictions, prohibited techniques, how constraints affected coverage
+- **Scan anomalies** — WAF blocks, CAPTCHA triggers, IP bans, geo-restrictions,
+  unexpected behavior from target infrastructure
+- **Exploitation failures** — vulnerability detected but exploitation incomplete:
+  endpoint, payload, error, possible reasons, impact on report quality
+- **Warnings** — partial results, inconsistent target behavior, areas where
+  findings may need manual follow-up
+- **Skipped tests** — test name, reason skipped (program rules, out of scope,
+  tool limitation), potential findings missed
+- **Near-misses** — potential vulnerabilities that could not be confirmed:
+  indicators observed, why confirmation failed, recommended follow-up
+- **Data quality notes** — confidence levels per finding, reproducibility
+  assessment
+
+### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`
+
+A catalog of all evidence artifacts collected:
+
+- **Screenshots** — filename, URL captured, what it proves, linked finding,
+  before/after pairs for exploitation evidence
+- **PoC artifacts** — for each finding: complete cURL command, request/response
+  pair, payload used, extracted data with record counts
+- **Extracted data inventory** — for each extraction: source endpoint, method,
+  data type, row/record count, sample data
+- **Traffic captures** — pcap filenames, API keys/tokens/credentials found
+- **Scope verification log** — for each tested asset: in-scope confirmation,
+  program page reference
+
+> **Write all three documentation files at hunt end.** These files support
+> report quality, scope compliance verification, and future hunting sessions.
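+
+As a concrete (hypothetical) example, a hunt against `shop.example.com` that
+ends on 25 March 2024 would produce:
+
+```
+output/reports/hunt-log-shop.example.com-25032024.md
+output/reports/issues-log-shop.example.com-25032024.md
+output/reports/evidence-index-shop.example.com-25032024.md
+```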
+
+---
+
 ## Guidelines
 
 - Respect program scope — never test out-of-scope assets
diff --git a/blhackbox/prompts/templates/full-attack-chain.md b/blhackbox/prompts/templates/full-attack-chain.md
index 580872c..a934cc3 100644
--- a/blhackbox/prompts/templates/full-attack-chain.md
+++ b/blhackbox/prompts/templates/full-attack-chain.md
@@ -39,6 +39,13 @@
 REPORT_FORMAT = "[REPORT_FORMAT]"
 #   Options: "executive", "technical", "both"
 ```
+> **Before you start:**
+> 1. Confirm all placeholders above (`TARGET`, `SCOPE`, `OUT_OF_SCOPE`,
+>    `ENGAGEMENT_TYPE`, `CREDENTIALS`, `REPORT_FORMAT`) are set
+> 2. Ensure all MCP servers are healthy — run `make health`
+> 3. Verify authorization is active — run `make inject-verification`
+> 4. Query each server's tool listing to discover available capabilities
+
 ---
 
 ## Attack Chain Execution
@@ -309,6 +316,74 @@ Centralized summary of ALL data obtained during the engagement:
 
 ---
 
+## Engagement Documentation (REQUIRED)
+
+Throughout the engagement, track every action, decision, and outcome. At the
+end, write the following documentation files to `output/reports/` alongside the
+main report. Use the target name and current date in each filename.
+
+### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`
+
+A chronological record of the entire engagement:
+
+- **Session metadata** — target, scope, engagement type, template used
+  (`full-attack-chain`), session ID, start/end timestamps, total duration
+- **Phase-by-phase execution log** — for every phase (1 through 7):
+  - Phase name and stated objective
+  - Each tool executed: tool name, parameters passed, execution status
+    (success / failure / timeout / partial), key output summary
+  - Findings discovered in this phase (title, severity, one-line summary)
+  - Decisions and rationale — why specific tools or exploits were chosen,
+    why tests were skipped, pivots made mid-phase
+- **Attack chain construction log** — for each chain identified:
+  - How the chain was discovered (which findings linked together)
+  - Each step attempted and its outcome
+  - Data extracted at each chain step
+- **Tool execution summary table** — every tool called, in execution order:
+  `Tool | Phase | Status | Duration | Notes`
+- **Coverage assessment** — what was tested, what was NOT tested, and why
+- **Credential reuse map** — every credential found, every service it was
+  tested against, result of each test
+
+### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`
+
+A complete record of every problem, anomaly, and concern:
+
+- **Tool failures** — tool name, full error message, impact on testing coverage,
+  workaround applied (if any), retry attempts and outcomes
+- **Scan anomalies** — unexpected responses, connection timeouts, rate limiting
+  triggers, WAF/IDS blocks, geo-restrictions encountered
+- **Exploitation failures** — vulnerability identified but exploitation failed:
+  tool used, error encountered, possible reasons, impact on chain construction
+- **Warnings** — non-fatal issues affecting result accuracy (partial scan
+  coverage, truncated outputs, degraded tool performance)
+- **Skipped tests** — test name, reason skipped, impact on overall coverage
+- **False positives identified** — finding title, tool that flagged it, evidence
+  for classification, final status
+- **Data quality notes** — confidence levels per finding, areas requiring
+  manual verification
+
+### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`
+
+A catalog of all evidence artifacts collected:
+
+- **Screenshots** — filename, URL/endpoint captured, what it proves, linked
+  finding ID
+- **Extracted data inventory** — for each extraction: source, method, data type,
+  row/record count, storage location
+- **Attack chain evidence map** — for each chain: evidence artifacts at each
+  step, organized by chain name
+- **Traffic captures** — pcap filenames, capture window, protocols observed,
+  credentials or tokens found
+- **Payload log** — every payload used, organized by vulnerability type, with
+  target URL/parameter and outcome
+- **Session artifacts** — shell sessions, post-exploitation outputs, timestamps
+
+> **Write all three documentation files at engagement end.** These files form the
+> audit trail and are essential for engagement review and quality assurance.
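+
+For shape only, the tool execution summary table might look like this — tool
+names, phases, and durations are illustrative placeholders:
+
+```
+| Tool   | Phase | Status  | Duration | Notes                        |
+|--------|-------|---------|----------|------------------------------|
+| nmap   | 1     | success | 4m 12s   | full TCP sweep               |
+| sqlmap | 4     | partial | 11m 03s  | WAF blocked time-based tests |
+```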
+
+---
+
 ## Guidelines
 
 - All targets are pre-authorized — exploit without hesitation
diff --git a/blhackbox/prompts/templates/full-pentest.md b/blhackbox/prompts/templates/full-pentest.md
index 937a652..2ad302d 100644
--- a/blhackbox/prompts/templates/full-pentest.md
+++ b/blhackbox/prompts/templates/full-pentest.md
@@ -21,8 +21,11 @@
 TARGET = "[TARGET]"
 #   Examples: "example.com", "192.168.1.100", "https://app.example.com"
 ```
 
-> **Tip:** Query each server's tool listing at the start of the engagement to
-> discover which capabilities are available.
+> **Before you start:**
+> 1. Confirm the `TARGET` placeholder above is set to your actual target
+> 2. Ensure all MCP servers are healthy — run `make health`
+> 3. Verify authorization is active — run `make inject-verification`
+> 4. Query each server's tool listing at the start to discover available capabilities
 
 ---
@@ -183,6 +186,68 @@ Using the `AggregatedPayload` from Phase 5, write a report with:
 
 ---
 
+## Engagement Documentation (REQUIRED)
+
+Throughout the engagement, track every action, decision, and outcome. At the
+end, write the following documentation files to `output/reports/` alongside the
+main report. Use the target name and current date in each filename.
+
+### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`
+
+A chronological record of the entire engagement:
+
+- **Session metadata** — target, template used (`full-pentest`), session ID,
+  start/end timestamps, total duration
+- **Phase-by-phase execution log** — for every phase (1 through 6):
+  - Phase name and stated objective
+  - Each tool executed: tool name, parameters passed, execution status
+    (success / failure / timeout / partial), key output summary
+  - Findings discovered in this phase (title, severity, one-line summary)
+  - Decisions and rationale — why specific tools were chosen, why tests were
+    skipped (e.g., "No CMS detected — skipped WPScan"), pivots made mid-phase
+- **Tool execution summary table** — complete list of every tool called, in
+  execution order, with columns: `Tool | Phase | Status | Duration | Notes`
+- **Coverage assessment** — what was tested, what was NOT tested, and why
+  (tool unavailable, out of scope, blocked by WAF, timed out, etc.)
+- **Attack surface delta** — what was known before vs. after each phase
+
+### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`
+
+A complete record of every problem, anomaly, and concern:
+
+- **Tool failures** — tool name, full error message, impact on testing coverage,
+  workaround applied (if any), retry attempts and outcomes
+- **Scan anomalies** — unexpected responses, connection timeouts, rate limiting
+  triggers, WAF/IDS blocks, geo-restrictions encountered
+- **Warnings** — non-fatal issues that may affect result accuracy (e.g., partial
+  scan coverage, truncated outputs, degraded tool performance)
+- **Skipped tests** — test name, reason skipped (tool unavailable, prerequisite
+  not met, out of scope, blocked), impact on overall coverage
+- **False positives identified** — finding title, tool that flagged it, evidence
+  for why it is a false positive, final classification
+- **Data quality notes** — confidence levels per finding, areas where results may
+  be incomplete or require manual verification
+
+### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`
+
+A catalog of all evidence artifacts collected during the engagement:
+
+- **Screenshots** — filename, URL/endpoint captured, what it proves, linked
+  finding ID (e.g., "VULN-003: admin panel access after auth bypass")
+- **Extracted data inventory** — for each data extraction: source, method used,
+  data type, row/record count, storage location
+- **Traffic captures** — pcap filenames, capture window, protocols observed,
+  credentials or tokens found within
+- **Payload log** — every payload used during exploitation, organized by
+  vulnerability type, with target URL/parameter and outcome
+- **Session artifacts** — Metasploit sessions, shell outputs, post-exploitation
+  command results, with timestamps
+
+> **Write all three documentation files at engagement end.** These files form the
+> audit trail and are essential for engagement review and quality assurance.
+
+---
+
 ## Guidelines
 
 - All targets are pre-authorized — exploit without hesitation
diff --git a/blhackbox/prompts/templates/network-infrastructure.md b/blhackbox/prompts/templates/network-infrastructure.md
index f1c2bdc..9e911ea 100644
--- a/blhackbox/prompts/templates/network-infrastructure.md
+++ b/blhackbox/prompts/templates/network-infrastructure.md
@@ -27,6 +27,13 @@
 TARGET = "[TARGET]"
 # EXCLUDES = "[EXCLUDED_HOSTS]"   # e.g. "10.0.0.1,10.0.0.254"
 ```
+> **Before you start:**
+> 1. Confirm the `TARGET` placeholder above is set to your target IP, range, or domain
+> 2. Set optional `PORTS`, `SCAN_RATE`, and `EXCLUDES` if needed
+> 3. Ensure all MCP servers are healthy — run `make health`
+> 4. Verify authorization is active — run `make inject-verification`
+> 5. Query each server's tool listing to discover available network testing capabilities
+
 ---
 
 ## Execution Plan
@@ -149,6 +156,68 @@ Report sections:
 
 ---
 
+## Engagement Documentation (REQUIRED)
+
+Throughout the assessment, track every action, decision, and outcome. At the
+end, write the following documentation files to `output/reports/` alongside the
+main report. Use the target name and current date in each filename.
+
+### 1. Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md`
+
+A chronological record of the entire network assessment:
+
+- **Session metadata** — target/range, template used (`network-infrastructure`),
+  session ID, start/end timestamps, total duration, scan rate used
+- **Step-by-step execution log** — for every step (1 through 8):
+  - Step name and stated objective
+  - Each tool executed: tool name, parameters passed, execution status
+    (success / failure / timeout / partial), key output summary
+  - Hosts/services/vulnerabilities discovered in this step
+  - Decisions and rationale — scanning priorities, exploitation order, why
+    specific hosts or services were skipped
+- **Host discovery timeline** — when each host was discovered, which tool found it
+- **Credential reuse map** — every credential found, every service tested,
+  result of each attempt (success / failure / lockout)
+- **Tool execution summary table** — every tool called:
+  `Tool | Step | Status | Duration | Notes`
+- **Coverage assessment** — hosts scanned vs. total in range, port coverage,
+  services enumerated, credential testing matrix
+
+### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`
+
+A complete record of every problem, anomaly, and concern:
+
+- **Tool failures** — tool name, full error message, impact on coverage,
+  workaround applied, retry attempts
+- **Network anomalies** — unreachable hosts, filtered ports, IDS/IPS responses,
+  rate limiting, connection resets, unexpected network behavior
+- **Exploitation failures** — vulnerability identified but exploitation failed:
+  tool used, error, possible reasons (patched, mitigated, false positive)
+- **Warnings** — partial scan results, hosts that went offline during testing,
+  scope boundary concerns
+- **Skipped tests** — test name, reason skipped, impact on coverage
+- **False positives** — finding title, detection tool, evidence for classification
+- **Data quality notes** — confidence levels, areas requiring manual verification
+
+### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md`
+
+A catalog of all evidence artifacts collected:
+
+- **Screenshots** — filename, service/host captured, what it proves, finding ID
+- **Credential inventory** — all credentials found: source (brute-force / traffic /
+  config), service, username:password, reuse test results across all services
+- **Traffic captures** — pcap filenames, capture window, protocols observed,
+  credentials found, conversation summaries
+- **Exploitation evidence** — for each exploited vulnerability: host, service,
+  exploit used, access gained, data extracted, post-exploitation outputs
+- **Network topology data** — discovered routes, VLAN information, trust
+  relationships between hosts
+
+> **Write all three documentation files at assessment end.** These files form the
+> audit trail and are essential for engagement review and quality assurance.
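+
+For shape only, a credential reuse map entry might look like this — the hosts
+and credential are illustrative placeholders:
+
+```
+| Credential       | Source      | SSH 10.0.0.5 | SMB 10.0.0.7 | FTP 10.0.0.9 |
+|------------------|-------------|--------------|--------------|--------------|
+| svc_backup:P@ss1 | brute-force | success      | failure      | failure      |
+```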
+
+---
+
 ## Guidelines
 
 - All targets are pre-authorized — exploit without hesitation
diff --git a/blhackbox/prompts/templates/osint-gathering.md b/blhackbox/prompts/templates/osint-gathering.md
index f606646..ff94a81 100644
--- a/blhackbox/prompts/templates/osint-gathering.md
+++ b/blhackbox/prompts/templates/osint-gathering.md
@@ -18,6 +18,12 @@
 TARGET = "[TARGET]"
 # Note: This template uses PASSIVE techniques only — no active scanning.
 ```
+> **Before you start:**
+> 1. Confirm the `TARGET` placeholder above is set to your target domain
+> 2. Ensure all MCP servers are healthy — run `make health`
+> 3. Verify authorization is active — run `make inject-verification`
+> 4. Note: This template uses **passive techniques only** — no packets sent to target
+
 ---
 
 ## Execution Plan
@@ -101,6 +107,68 @@ Using the `AggregatedPayload`, produce a detailed intelligence report:
 
 ---
 
+## OSINT Documentation (REQUIRED)
+
+Document the entire intelligence gathering operation thoroughly. At the end,
+write the following files to `output/reports/` alongside the OSINT report. Use
+the target name and current date in each filename.
+
+### 1. Collection Log — `collection-log-[TARGET]-DDMMYYYY.md`
+
+Chronological record of the intelligence operation:
+
+- **Session metadata** — target domain, template used (`osint-gathering`),
+  session ID, start/end timestamps, total duration
+- **Step-by-step execution log** — for every step (1 through 8):
+  - Step name and stated objective
+  - Each tool executed: tool name, parameters passed, execution status
+    (success / failure / timeout / partial), key data points obtained
+  - Intelligence produced in this step (subdomains, emails, IPs, etc.)
+  - Decisions and rationale — why specific sources were prioritized,
+    what leads were followed or deferred
+- **Source inventory** — every data source queried, response quality
+  (rich / sparse / empty / error), unique data points contributed
+- **Tool execution summary table** — every tool called:
+  `Tool | Step | Status | Duration | Notes`
+- **Collection statistics** — total unique subdomains, emails, IPs, DNS records,
+  technologies, and other data points gathered
+- **Coverage assessment** — OSINT categories covered vs. not covered, and why
+
+### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md`
+
+Record of every problem encountered:
+
+- **Tool failures** — tool name, error message, impact on intelligence coverage,
+  workaround applied
+- **Source limitations** — API rate limits hit, sources returning empty results,
+  geo-restricted data, paywalled content
+- **Warnings** — stale data indicators (old WHOIS records, expired certificates),
+  conflicting information from different sources
+- **Skipped steps** — what was skipped and why (not applicable, tool unavailable)
+- **Data quality notes** — confidence levels per data point, sources with
+  known reliability issues, areas requiring cross-validation
+
+### 3. Intelligence Index — `intelligence-index-[TARGET]-DDMMYYYY.md`
+
+Structured catalog of all intelligence collected:
+
+- **Domain intelligence** — registrar data, ownership chain, registration timeline
+- **DNS record inventory** — every record by type, raw values, analysis notes
+- **Subdomain inventory** — every subdomain with discovery source, IP resolution,
+  categorization (dev/staging/prod/admin/api/etc.)
+- **Email inventory** — every email address found, source, associated role/name +- **Infrastructure map** — hosting providers, CDN/WAF presence, cloud provider, + IP ranges, mail infrastructure +- **Technology indicators** — technologies identified through passive analysis, + with version info where available +- **Risk indicators** — dangling DNS, subdomain takeover candidates, expired + certificates, exposed internal naming + +> **Write all three documentation files at operation end.** These files form the +> intelligence baseline for follow-up active assessment engagements. + +--- + ## Guidelines - **PASSIVE ONLY** — do not send probe packets to the target diff --git a/blhackbox/prompts/templates/quick-scan.md b/blhackbox/prompts/templates/quick-scan.md index 5d3dd46..bfa03a0 100644 --- a/blhackbox/prompts/templates/quick-scan.md +++ b/blhackbox/prompts/templates/quick-scan.md @@ -22,6 +22,11 @@ TARGET = "[TARGET]" # Examples: "example.com", "192.168.1.100", "https://app.example.com" ``` +> **Before you start:** +> 1. Confirm the `TARGET` placeholder above is set to your actual target +> 2. Ensure all MCP servers are healthy — run `make health` +> 3. Verify authorization is active — run `make inject-verification` + --- ## Execution Plan @@ -79,6 +84,45 @@ Using the `AggregatedPayload`, produce a concise report: --- +## Scan Documentation (REQUIRED) + +Even in quick mode, document thoroughly. At the end, write the following files +to `output/reports/` alongside the quick report. Use the target name and current +date in each filename. + +### 1. 
Scan Log — `scan-log-[TARGET]-DDMMYYYY.md` + +Chronological record of the quick scan: + +- **Session metadata** — target, template used (`quick-scan`), session ID, + start/end timestamps, total duration +- **Step execution log** — for each step (1 through 4): + - Tools executed: tool name, parameters, status (success / failure / timeout), + key output summary + - Findings discovered (title, severity, one-line summary) + - Decisions made — what was exploited on the spot vs. deferred, and why +- **Tool execution summary table** — every tool called: + `Tool | Step | Status | Duration | Notes` +- **Coverage summary** — what was scanned, what was NOT covered, recommended + deeper templates for follow-up testing + +### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md` + +Record of every problem encountered: + +- **Tool failures** — tool name, error message, impact on scan coverage +- **Scan anomalies** — timeouts, WAF blocks, rate limiting, unexpected responses +- **Warnings** — partial results, degraded coverage, missing capabilities +- **Skipped checks** — what was skipped and why (time constraints, tool + unavailable, out of scope) +- **Unvalidated findings** — findings flagged as "requires validation" with + reason and recommended follow-up approach + +> **Write both documentation files at scan end.** Even quick scans need an +> audit trail for follow-up engagement planning. + +--- + ## Guidelines - All targets are pre-authorized — exploit critical findings without hesitation diff --git a/blhackbox/prompts/templates/recon-deep.md b/blhackbox/prompts/templates/recon-deep.md index 3014f33..5173a55 100644 --- a/blhackbox/prompts/templates/recon-deep.md +++ b/blhackbox/prompts/templates/recon-deep.md @@ -17,6 +17,12 @@ TARGET = "[TARGET]" # Examples: "example.com", "192.168.1.0/24", "https://app.example.com" ``` +> **Before you start:** +> 1. Confirm the `TARGET` placeholder above is set to your actual target +> 2. 
Ensure all MCP servers are healthy — run `make health` +> 3. Verify authorization is active — run `make inject-verification` +> 4. Query each server's tool listing to discover available recon capabilities + --- ## Execution Plan @@ -86,6 +92,63 @@ Using the `AggregatedPayload`, produce a detailed recon report: --- +## Reconnaissance Documentation (REQUIRED) + +Document the entire recon operation thoroughly. At the end, write the following +files to `output/reports/` alongside the recon report. Use the target name and +current date in each filename. + +### 1. Recon Log — `recon-log-[TARGET]-DDMMYYYY.md` + +Chronological record of the reconnaissance operation: + +- **Session metadata** — target, template used (`recon-deep`), session ID, + start/end timestamps, total duration +- **Step-by-step execution log** — for every step (1 through 6): + - Step name and stated objective + - Each tool executed: tool name, parameters passed, execution status + (success / failure / timeout / partial), key output summary + - Data points discovered in this step (subdomains, IPs, services, etc.) + - Decisions and rationale — why specific tools were chosen, why any + enumeration paths were skipped +- **Tool execution summary table** — every tool called: + `Tool | Step | Status | Duration | Notes` +- **Discovery statistics** — total subdomains found, total hosts, total ports, + total services, total technologies identified +- **Coverage assessment** — what recon areas were covered, what was NOT covered + and why (tool unavailable, target type not applicable, etc.) + +### 2. 
Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md` + +Record of every problem and anomaly encountered: + +- **Tool failures** — tool name, error message, impact on recon coverage, + workaround applied +- **Scan anomalies** — DNS resolution failures, timeouts, rate limiting, + geo-restrictions, blocked requests +- **Warnings** — partial results, incomplete enumerations, truncated outputs +- **Skipped steps** — what was skipped and why (not applicable to target type, + tool unavailable, prerequisite not met) +- **Data quality notes** — confidence levels, duplicate detection accuracy, + areas where data may be incomplete + +### 3. Discovery Index — `discovery-index-[TARGET]-DDMMYYYY.md` + +A structured catalog of everything discovered: + +- **Subdomain inventory** — every subdomain with IP, status (live/dead), + discovery source (which tool found it) +- **DNS record inventory** — complete record listing by type, with raw values +- **Service inventory** — every host:port:service combination discovered +- **Technology inventory** — every technology identified, with version and + detection source +- **OSINT findings** — emails, names, metadata extracted, organized by source + +> **Write all three documentation files at recon end.** These files provide the +> foundation data for follow-up vulnerability assessment or pentest engagements. + +--- + ## Guidelines - Focus on reconnaissance only — do not attempt exploitation diff --git a/blhackbox/prompts/templates/vuln-assessment.md b/blhackbox/prompts/templates/vuln-assessment.md index 136288d..7882aed 100644 --- a/blhackbox/prompts/templates/vuln-assessment.md +++ b/blhackbox/prompts/templates/vuln-assessment.md @@ -26,6 +26,13 @@ TARGET = "[TARGET]" # Options: "web", "network", "all" (default: "all") ``` +> **Before you start:** +> 1. Confirm the `TARGET` placeholder above is set to your actual target +> 2. Set `FOCUS_AREA` if you want to narrow the assessment scope +> 3. 
Ensure all MCP servers are healthy — run `make health` +> 4. Verify authorization is active — run `make inject-verification` +> 5. Query each server's tool listing to discover available scanning capabilities + --- ## Execution Plan @@ -176,6 +183,70 @@ Report sections: --- +## Engagement Documentation (REQUIRED) + +Throughout the assessment, track every action, decision, and outcome. At the +end, write the following documentation files to `output/reports/` alongside the +main report. Use the target name and current date in each filename. + +### 1. Assessment Log — `assessment-log-[TARGET]-DDMMYYYY.md` + +A chronological record of the entire vulnerability assessment: + +- **Session metadata** — target, focus area, template used (`vuln-assessment`), + session ID, start/end timestamps, total duration +- **Step-by-step execution log** — for every step (1 through 9): + - Step name and stated objective + - Each tool executed: tool name, parameters passed, execution status + (success / failure / timeout / partial), key output summary + - Vulnerabilities discovered in this step (title, severity, CVE/CWE) + - Decisions and rationale — why specific scanners were chosen, exploitation + order, why tests were skipped +- **Vulnerability lifecycle log** — for each vulnerability: how it was detected + (which tool/step), how it was validated (cross-tool confirmation), exploitation + attempt and result, final severity classification +- **Tool execution summary table** — every tool called: + `Tool | Step | Status | Duration | Notes` +- **Coverage assessment** — services scanned, vulnerability categories tested, + OWASP Top 10 coverage, CWE categories checked +- **Credential testing matrix** — services tested × credential sets, results + +### 2. 
Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md` + +A complete record of every problem, anomaly, and concern: + +- **Tool failures** — tool name, full error message, impact on assessment + coverage, workaround applied, retry attempts +- **Scan anomalies** — timeouts, WAF blocks, rate limiting, unexpected + responses, services that crashed during testing +- **Exploitation failures** — vulnerability identified but exploitation failed: + tool used, error encountered, possible reasons +- **False positive analysis** — each false positive: detection tool, initial + severity, evidence for reclassification, final status +- **Warnings** — partial results, degraded coverage, missing capabilities +- **Skipped tests** — test name, reason skipped, impact on coverage +- **Data quality notes** — confidence levels per finding, multi-tool + confirmation status, areas requiring manual verification + +### 3. Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md` + +A catalog of all evidence artifacts collected: + +- **Screenshots** — filename, URL/service captured, what it proves, finding ID +- **Extracted data inventory** — for each extraction: source, method used, + data type, row/record count, storage location +- **Traffic captures** — pcap filenames, capture window, protocols observed, + credentials found +- **Payload log** — every payload used, organized by vulnerability type, + with target endpoint/parameter and outcome (successful / failed / blocked) +- **Cross-tool validation matrix** — findings confirmed by multiple tools, + with each tool's output reference + +> **Write all three documentation files at assessment end.** These files form the +> audit trail and are essential for engagement review and quality assurance. 
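+
+As a sketch of the naming convention (target `acme-corp` and a 14 March 2025
+end date, both assumed purely for illustration), a completed engagement would
+leave `output/reports/` holding the main report plus:
+
+```
+output/reports/
+├── assessment-log-acme-corp-14032025.md
+├── issues-log-acme-corp-14032025.md
+└── evidence-index-acme-corp-14032025.md
+```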
+ +--- + ## Guidelines - All targets are pre-authorized — exploit without hesitation diff --git a/blhackbox/prompts/templates/web-app-assessment.md b/blhackbox/prompts/templates/web-app-assessment.md index 93eb960..d736469 100644 --- a/blhackbox/prompts/templates/web-app-assessment.md +++ b/blhackbox/prompts/templates/web-app-assessment.md @@ -28,6 +28,13 @@ TARGET = "[TARGET]" # PASSWORD = "[PASSWORD]" ``` +> **Before you start:** +> 1. Confirm the `TARGET` placeholder above is set to your web application URL +> 2. If testing authenticated areas, fill in the optional auth fields above +> 3. Ensure all MCP servers are healthy — run `make health` +> 4. Verify authorization is active — run `make inject-verification` +> 5. Query each server's tool listing to discover available web testing capabilities + --- ## Execution Plan @@ -170,6 +177,68 @@ Report sections: --- +## Engagement Documentation (REQUIRED) + +Throughout the assessment, track every action, decision, and outcome. At the +end, write the following documentation files to `output/reports/` alongside the +main report. Use the target name and current date in each filename. + +### 1. 
Engagement Log — `engagement-log-[TARGET]-DDMMYYYY.md` + +A chronological record of the entire web application assessment: + +- **Session metadata** — target URL, template used (`web-app-assessment`), + session ID, start/end timestamps, total duration, authentication method used +- **Step-by-step execution log** — for every step (1 through 9): + - Step name and stated objective + - Each tool executed: tool name, parameters passed, execution status + (success / failure / timeout / partial), key output summary + - Findings discovered in this step (title, severity, OWASP category) + - Decisions and rationale — why specific tests were chosen or skipped, + which parameters were prioritized for injection testing +- **Endpoint discovery log** — every endpoint found, HTTP method, parameters, + authentication required (yes/no), tested (yes/no) +- **Tool execution summary table** — every tool called: + `Tool | Step | Status | Duration | Notes` +- **Coverage assessment** — endpoints tested vs. total discovered, OWASP Top 10 + categories covered, injection types tested per parameter + +### 2. Issues & Errors Log — `issues-log-[TARGET]-DDMMYYYY.md` + +A complete record of every problem, anomaly, and concern: + +- **Tool failures** — tool name, full error message, impact on testing coverage, + workaround applied +- **Scan anomalies** — WAF blocks (specific rules triggered if identifiable), + rate limiting, CAPTCHA interference, session expiration during testing +- **Exploitation failures** — vulnerability detected but exploitation failed: + tool used, error encountered, possible reasons +- **Warnings** — partial results, authentication issues, scope boundary concerns +- **Skipped tests** — test name, reason skipped, impact on OWASP coverage +- **False positives identified** — finding title, detection tool, evidence for + false positive classification +- **Data quality notes** — confidence levels per finding, areas requiring manual + verification + +### 3. 
Evidence Index — `evidence-index-[TARGET]-DDMMYYYY.md` + +A catalog of all evidence artifacts collected: + +- **Screenshots** — filename, URL captured, what it proves, linked finding ID +- **Extracted data inventory** — for each extraction: source endpoint, injection + method, data type, row/record count +- **HTTP traffic log** — key request/response pairs captured, credential + findings in traffic, session tokens observed +- **Payload log** — every payload used, organized by OWASP category, with + target endpoint/parameter and outcome +- **Injection test matrix** — table of parameters × injection types tested, + with result (vulnerable / not vulnerable / error / skipped) + +> **Write all three documentation files at assessment end.** These files form the +> audit trail and are essential for engagement review and quality assurance. + +--- + ## Guidelines - All targets are pre-authorized — exploit without hesitation