Running Reconnaissance
The reconnaissance pipeline is RedAmon's core scanning engine — a fully automated, parallelized process that maps your target's entire attack surface using a fan-out / fan-in architecture. Independent modules run concurrently via ThreadPoolExecutor, while data-dependent steps run sequentially. This page explains how to launch a scan, monitor its progress, and understand the results.
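The fan-out / fan-in structure described above can be sketched as follows. This is a minimal illustration with placeholder module functions, not RedAmon's actual code: independent modules in a group fan out onto a thread pool, their results fan back in, and groups run one after another.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder modules; RedAmon's real modules wrap external tools.
def whois_lookup(target):
    return {"whois": target}

def subdomain_discovery(target):
    return {"subdomains": [f"www.{target}"]}

def urlscan_enrich(target):
    return {"urlscan": []}

def run_group(modules, target):
    """Fan out: run independent modules concurrently; fan in: merge results."""
    with ThreadPoolExecutor(max_workers=len(modules)) as pool:
        futures = [pool.submit(module, target) for module in modules]
        results = {}
        for future in futures:
            results.update(future.result())  # fan-in: merge each module's output
    return results

# Groups execute sequentially because later groups depend on earlier output.
group1 = run_group([whois_lookup, subdomain_discovery, urlscan_enrich], "example.com")
```

A data-dependent step (such as DNS resolution over discovered subdomains) would simply consume `group1` before the next `run_group` call.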
Make sure you have:
- A user selected (see User Management)
- A project created with a target domain or IP/CIDR targets configured (see Creating a Project)
- The Graph Dashboard open with your project selected (see The Graph Dashboard)
- On the Graph Dashboard, locate the Recon Actions group (blue) in the toolbar
- Click the "Start Recon" button
A confirmation modal appears showing:
- Your project name and target domain
- Current graph statistics (how many nodes of each type already exist, if any)

- Click "Confirm" to start the scan
The "Start Recon" button changes to a spinner while the scan is running.
Once the scan starts, a Logs button (terminal icon) appears in the Recon Actions group.
- Click the Logs button to open the Logs Drawer on the right side
- Watch the real-time output as each phase progresses

The logs drawer shows:
- Current phase with phase number (e.g., "Phase 3: HTTP Probing")
- Log messages streaming in real-time as the scan progresses
- A Clear button to reset the log display
While the reconnaissance runs, the graph canvas auto-refreshes every 5 seconds. You'll see nodes appearing and connecting in real-time:
- First, Domain and Subdomain nodes appear (GROUP 1 — parallel: WHOIS + 5 discovery tools + URLScan)
- Then IP nodes connect to subdomains (GROUP 1 — DNS with 20 parallel workers)
- ExternalDomain nodes appear from URLScan enrichment (GROUP 1)
- Port nodes attach to IPs, Shodan enrichment data merges in (GROUP 3 — parallel: Naabu + Shodan)
- BaseURL, Service, and Technology nodes appear (GROUP 4 — HTTP Probe)
- Endpoint and Parameter nodes branch out (GROUP 5 — parallel: Katana + GAU + Kiterunner)
- Vulnerability and CVE nodes connect to affected resources (GROUP 6 — Nuclei + MITRE)
When the scan completes:
- The spinner stops and the "Start Recon" button reappears
- A Download button (download icon) appears in the Recon Actions group
- Click it to download the complete results as a JSON file (`recon_{projectId}.json`)
The pipeline is organized into execution groups. Modules within each group run concurrently; groups execute sequentially because later groups depend on earlier results. Graph DB updates run in a dedicated background thread so the main pipeline is never blocked.
| Group | Modules | Parallelism |
|---|---|---|
| GROUP 1 | WHOIS + Subdomain Discovery + URLScan | 3 parallel tasks |
| ↳ Discovery | crt.sh, HackerTarget, Subfinder, Amass, Knockpy | 5 parallel tools |
| ↳ DNS | DNS resolution for all subdomains | 20 parallel workers |
| GROUP 3 | Shodan + Port Scan (Naabu) | 2 parallel tasks |
| GROUP 4 | HTTP Probe (httpx) | Sequential (internally parallel) |
| GROUP 5 | Resource Enum (Katana + Hakrawler + GAU + Kiterunner + jsluice) | 4 tools parallel, then jsluice sequential |
| GROUP 6 | Vuln Scan (Nuclei) + MITRE Enrichment | Sequential |
Each phase builds on the previous group's output. You can control which modules run via the Scan Modules setting in your project configuration.
The groups below describe the domain mode pipeline (the default). When a project uses IP/CIDR mode ("Start from IP" enabled), GROUP 1 is replaced:
| | Domain Mode | IP/CIDR Mode |
|---|---|---|
| GROUP 1 | Subdomain discovery (5 tools in parallel) + DNS (20 workers) + WHOIS + URLScan | CIDR expansion → Reverse DNS (PTR) per IP → IP WHOIS |
| Graph root | Real Domain node | Mock Domain node (ip-targets.{project_id}) |
| GAU | Available | Skipped (archives index by domain) |
| GROUPs 3-6 | Unchanged | Unchanged |
In IP mode, each target IP is resolved via PTR to discover its hostname. When no PTR record exists, a mock hostname is generated (e.g., 192-168-1-1). The remaining groups (port scan through MITRE enrichment) run identically.
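The PTR-with-fallback behavior can be sketched with the standard library. The dash-separated mock format here mirrors the example in the docs (192.168.1.1 becomes 192-168-1-1); the exact format RedAmon generates may differ.

```python
import socket

def mock_hostname(ip):
    """Fallback hostname when no PTR record exists, e.g. 192.168.1.1 -> 192-168-1-1."""
    return ip.replace(".", "-")

def ptr_or_mock(ip):
    """Resolve an IP via reverse DNS (PTR); fall back to a mock hostname."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except OSError:
        # Private addresses and unallocated IPs usually have no PTR record.
        return mock_hostname(ip)
```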
Purpose: Map the target's subdomain landscape. All three top-level tasks (WHOIS, discovery, URLScan) run concurrently. Within discovery, all 5 tools run in parallel.
Techniques used (all concurrent):
- Certificate Transparency via crt.sh — finds certificates issued for the domain
- HackerTarget API — passive DNS lookup
- Subfinder — passive subdomain enumeration using 50+ online sources (certificate logs, DNS databases, web archives)
- Amass — OWASP Amass subdomain enumeration using 50+ data sources (certificate logs, DNS databases, web archives, WHOIS). Supports optional active mode (zone transfers, certificate grabs) and DNS brute forcing
- Knockpy — active subdomain brute-forcing (if `useBruteforceForSubdomains` is enabled)
- WHOIS Lookup — registrar, dates, contacts, name servers (runs in parallel with discovery)
- URLScan.io — historical scan data, subdomains, IPs, TLS metadata (runs in parallel with discovery)
- DNS Resolution — A, AAAA, MX, NS, TXT, CNAME, SOA records for every discovered subdomain (20 parallel workers)
Output: Domain, Subdomain, IP, and DNSRecord nodes in the graph.
If a specific `subdomainList` is configured, the pipeline skips active discovery and only resolves those subdomains (WHOIS + URLScan still run in parallel). In IP mode, this group is replaced by reverse DNS lookups and IP WHOIS — see above.
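The 20-worker DNS resolution step has the same fan-out shape as the rest of the pipeline. A minimal sketch using the socket module (A records only; the full record set — AAAA, MX, NS, TXT, CNAME, SOA — needs a dedicated DNS library):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve_a(hostname):
    """Return the IPv4 addresses for a hostname, or [] if resolution fails."""
    try:
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []

def resolve_all(hostnames, workers=20):
    # 20 parallel resolver workers, matching the pipeline's GROUP 1 DNS step.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(hostnames, pool.map(resolve_a, hostnames)))
```

`pool.map` preserves input order, so results zip cleanly back to their hostnames even though lookups finish out of order.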
Shodan enrichment (runs in GROUP 3 alongside port scan):
- Host Lookup — OS, ISP, organization, geolocation, and known vulnerabilities per IP
- Reverse DNS — discover hostnames missed by standard enumeration
- Domain DNS — subdomain enumeration via Shodan's DNS records (paid plan required)
- Passive CVEs — extract known CVEs from host data without active scanning
URLScan.io enrichment (runs in GROUP 1 alongside discovery):
- Queries historical scan data from URLScan.io's Search API
- Discovers subdomains, IP addresses, URL paths, TLS metadata, server technologies, and domain age
- Collects external domains encountered in historical scans for situational awareness
- Works without API key (public results) or with key (higher rate limits)
ExternalDomain nodes: Throughout the pipeline, multiple modules collect out-of-scope domains (URLScan historical data, HTTP probe redirects, Katana/GAU crawling). At the end of the pipeline, these are aggregated, deduplicated, and stored as ExternalDomain nodes linked to the root Domain.
Both modules are independently toggleable in the Discovery & OSINT tab of project settings. If URLScan enrichment runs, the `urlscan` provider is automatically removed from GAU to avoid duplicate data.
Purpose: Discover open ports and enrich IPs with Shodan intelligence. Both tasks run concurrently.
Port Scan (Naabu) capabilities:
- SYN scanning (default) with CONNECT fallback
- Top-N port selection (100, 1000, or custom ranges)
- CDN/WAF detection (Cloudflare, Akamai, AWS CloudFront)
- Passive mode via Shodan InternetDB (no packets sent)
- IANA service name mapping (15,000+ entries)
Output: Port nodes linked to IP nodes. Enriched IP nodes from Shodan (OS, ISP, geolocation, passive CVEs).
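The IANA service-name mapping can be approximated with the standard library, which consults the operating system's services database (a small subset of the 15,000+ entry registry the scanner ships):

```python
import socket

def service_name(port, proto="tcp"):
    """Best-effort IANA-style service name for a port number."""
    try:
        return socket.getservbyport(port, proto)
    except OSError:
        # Port not present in the local services database.
        return "unknown"

# Typical results: 22 -> ssh, 80 -> http, 443 -> https
names = {port: service_name(port) for port in (22, 80, 443)}
```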
Purpose: Determine which services are live and what software they run.
httpx probing:
- Status codes, content types, page titles, server headers
- TLS certificate inspection (subject, issuer, expiry, ciphers, JARM)
- Response times, word counts, line counts
Technology detection (dual engine):
- httpx built-in fingerprinting for major frameworks
- Wappalyzer second pass (6,000+ fingerprints) for CMS plugins, JS libraries, analytics tools
Banner grabbing:
- Raw socket connections for non-HTTP services (SSH, FTP, SMTP, MySQL, Redis)
- Protocol-specific probe strings for version extraction
Output: BaseURL, Service, Technology, Certificate, Header nodes.
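Banner grabbing as described above reduces to opening a raw TCP connection and reading the service's greeting. A minimal sketch (timeout and probe values are illustrative, not RedAmon's actual settings):

```python
import socket

def grab_banner(host, port, timeout=3.0, probe=b""):
    """Connect to a TCP service and read whatever it sends first.

    Services like SSH, FTP, and SMTP announce a version banner on connect;
    others need a protocol-specific probe string sent first.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            if probe:
                sock.sendall(probe)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        return ""
```

For example, an SSH server typically answers with a line like `SSH-2.0-OpenSSH_9.6`, which is enough to attach a version to the Service node.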
Purpose: Discover every reachable endpoint. Four tools run simultaneously, then jsluice performs sequential JS analysis.
| Tool | Type | Description |
|---|---|---|
| Katana | Active | Web crawler following links to configurable depth, optionally with JavaScript rendering |
| Hakrawler | Active | DOM-aware web crawler via Docker, discovers links and forms |
| GAU | Passive | Queries Wayback Machine, Common Crawl, AlienVault OTX, URLScan.io for historical URLs |
| Kiterunner | Active | API brute-forcer testing REST/GraphQL route wordlists |
| jsluice | Passive | JavaScript analysis — extracts URLs, endpoints, and embedded secrets (AWS keys, API tokens, etc.) from .js files discovered by Katana/Hakrawler |
Katana, Hakrawler, GAU, and Kiterunner run in parallel. Once crawling completes, jsluice analyzes the discovered JavaScript files sequentially to extract hidden endpoints and secrets.
Results are merged, deduplicated, and classified:
- Categories: auth, file_access, api, dynamic, static, admin
- Parameter typing: id, file, search, auth_param
Output: Endpoint, Parameter, and Secret nodes linked to BaseURL nodes.
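The classification step can be illustrated with simple keyword rules over the URL path and query. These rules are a rough sketch built from the category names above; RedAmon's real classifier is more involved.

```python
import re
from urllib.parse import urlparse, parse_qs

# Illustrative keyword rules, checked in order of specificity.
CATEGORY_RULES = [
    ("auth",        re.compile(r"login|logout|signin|oauth|token", re.I)),
    ("admin",       re.compile(r"admin|manage|dashboard", re.I)),
    ("file_access", re.compile(r"download|upload|file|export", re.I)),
    ("api",         re.compile(r"/api/|/v\d+/|graphql", re.I)),
    ("static",      re.compile(r"\.(css|js|png|jpg|svg|woff2?)$", re.I)),
]

def classify(url):
    path = urlparse(url).path
    for category, pattern in CATEGORY_RULES:
        if pattern.search(path):
            return category
    # Anything parameterized is at least dynamic.
    return "dynamic" if urlparse(url).query else "static"

def typed_params(url):
    """Rough parameter typing in the spirit of id / file / search / auth_param."""
    known = {"id": "id", "file": "file", "q": "search", "token": "auth_param"}
    return {name: known.get(name, "generic") for name in parse_qs(urlparse(url).query)}
```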
Purpose: Test discovered endpoints for security vulnerabilities.
Capabilities:
- 9,000+ community templates for known CVEs, misconfigurations, exposed panels
- DAST mode — active fuzzing with XSS, SQLi, RCE, LFI, SSRF, SSTI payloads
- Severity filtering — scan for critical, high, medium, and/or low findings
- Interactsh — out-of-band detection for blind vulnerabilities
- CVE enrichment — cross-references findings against NVD for CVSS scores
30+ custom security checks (configurable individually):
- Direct IP access, missing security headers (CSP, HSTS, etc.)
- TLS certificate expiry, DNS security (SPF, DMARC, DNSSEC, zone transfer)
- Open services (Redis no-auth, Kubernetes API, SMTP open relay)
- Insecure form actions, missing rate limiting
Output: Vulnerability and CVE nodes linked to Endpoints and Parameters.
MITRE Enrichment (runs automatically after Nuclei):
- Maps every CVE to its corresponding CWE weakness and CAPEC attack patterns
- Uses the CVE2CAPEC repository (auto-updated with 24-hour cache TTL)
- Provides attack pattern classification for every vulnerability found
Additional MITRE output: MitreData (CWE) and Capec nodes linked to CVE nodes.
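The 24-hour cache TTL around the CVE2CAPEC data can be sketched as a simple time-stamped loader. The mapping entry below is a hypothetical excerpt for illustration; the real repository provides CWE and CAPEC identifiers per CVE.

```python
import time

# Hypothetical excerpt of a CVE2CAPEC-style mapping (illustration only).
CVE2CAPEC_SAMPLE = {
    "CVE-2021-44228": {"cwe": ["CWE-502"], "capec": ["CAPEC-248"]},
}

class TTLCache:
    """Refresh the mapping only when it is older than ttl (24 h here)."""

    def __init__(self, loader, ttl=24 * 3600):
        self.loader, self.ttl = loader, ttl
        self.data, self.fetched_at = None, 0.0

    def get(self):
        if self.data is None or time.time() - self.fetched_at > self.ttl:
            self.data = self.loader()  # e.g. re-pull the CVE2CAPEC repository
            self.fetched_at = time.time()
        return self.data

cache = TTLCache(lambda: CVE2CAPEC_SAMPLE)
enrichment = cache.get().get("CVE-2021-44228", {})
```

Within the TTL window every `get()` returns the cached mapping, so repeated CVE lookups during a scan never hit the network twice.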
Duration varies based on target size, network conditions, and scan settings:
| Target Type | Approximate Duration |
|---|---|
| Small (1-5 subdomains, few ports) | 5-15 minutes |
| Medium (10-50 subdomains) | 15-45 minutes |
| Large (100+ subdomains) | 1-3 hours |
| IP mode (single IP) | 5-10 minutes |
| IP mode (CIDR /24 = 254 hosts) | 30-90 minutes |
Key factors affecting duration:
- Bruteforce for subdomains adds significant time for large domains
- Katana depth > 2 increases crawling time exponentially
- DAST mode doubles vulnerability scanning time
- GAU with verification adds 30-60 seconds per domain
Once the scan is complete, you can:
- Explore the graph — click nodes to inspect their properties, filter by type using the bottom bar
- Switch to Data Table — view all findings in a searchable, sortable table with Excel export
- Run GVM scan — complement web-layer findings with network-level vulnerability testing (see GVM Vulnerability Scanning)
- Run GitHub Hunt — search for leaked secrets (see GitHub Secret Hunting)
- Use the AI Agent — ask the agent to analyze findings, identify attack paths, and exploit vulnerabilities (see AI Agent Guide)
- GVM Vulnerability Scanning — add network-level vulnerability testing
- AI Agent Guide — let the AI analyze and act on your findings