Crawl any site, research keywords with SerpAPI, analyse internal link structure, generate a content gap report and PDF strategy — all from the CLI in minutes. No SaaS subscription, no browser extension.
| Requirement | Version | Notes |
|---|---|---|
| Node.js | >=18 | Runtime for all pipeline scripts |
| Python | >=3.9 | Required for PDF export only |
| pip | any | Installs markdown and playwright Python packages |
| SerpAPI key | — | 250 free searches/month; each keyword costs 1 credit |
After cloning, run `npm run setup` — this installs all Node and Python dependencies and downloads the Playwright Chromium browser used for both crawling and PDF export.
If the Playwright Python browser was not installed by setup, run:
```
playwright install chromium
```

```
git clone https://github.com/longieirl/seo-audit-template.git
cd seo-audit-template
npm run setup
npm run audit -- https://yoursite.com YOUR_SERP_API_KEY
```

Output lands in `./<site>-seo/`. If your `config.js` still has placeholder keywords, the tool will automatically suggest keywords extracted from the crawled content and prompt you to pick the ones you want.
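The auto-suggest behaviour described above can be sketched as simple phrase-frequency extraction over the crawled text. This is a minimal illustration, not the tool's actual heuristics — `suggestKeywords`, the stopword list, and the bigram approach are all assumptions for the sketch:

```javascript
// Hypothetical sketch of frequency-based keyword suggestion.
// The real `suggest` step may weight headings, titles, etc. differently.
const STOPWORDS = new Set(['the', 'and', 'for', 'with', 'your', 'that', 'this', 'from']);

function suggestKeywords(pageText, limit = 5) {
  const counts = new Map();
  const words = pageText.toLowerCase().match(/[a-z]+/g) || [];
  // Count two-word phrases, skipping any bigram containing a stopword.
  for (let i = 0; i < words.length - 1; i++) {
    if (STOPWORDS.has(words[i]) || STOPWORDS.has(words[i + 1])) continue;
    const phrase = `${words[i]} ${words[i + 1]}`;
    counts.set(phrase, (counts.get(phrase) || 0) + 1);
  }
  // Most frequent phrases first.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([phrase]) => phrase);
}

console.log(suggestKeywords('dingle bay tours daily dingle bay tours boat trips'));
```

In the real pipeline the suggestions are shown interactively so you can pick the ones worth spending SerpAPI credits on.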
Claude Code users: the `/seo:audit` skill is bundled in this repo. Run `claude` in the project directory, then `/seo:audit https://yoursite.com` — Claude handles keywords and strategy automatically. Requires a paid Claude account.
```
<site>-seo/
  content/
    INDEX.md                  ← list of all crawled pages
    <page-slug>.md            ← one file per page
    link-graph.json           ← body-content internal link map
  serp/
    all_results.json          ← all keyword data combined
    <keyword>.json            ← per-keyword SERP results
  LINK_AUDIT.md               ← orphan, weak, and overlinked page report
  GAP_REPORT.md               ← auto-generated content gap report
  SEO_CONTENT_STRATEGY.md     ← written strategy (AI mode only)
  SEO_CONTENT_STRATEGY.pdf    ← exported PDF
```
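The `LINK_AUDIT.md` orphan report is derived from `link-graph.json`. As a rough sketch of the idea (the actual schema of `link-graph.json` and the real audit logic are not shown here — the shape `{ slug: [linked slugs] }` is an assumption):

```javascript
// Hypothetical: find crawled pages that no other page links to.
// Assumes a link map shaped like { "page-slug": ["target-slug", ...] }.
function findOrphans(linkGraph) {
  const linkedTo = new Set();
  for (const targets of Object.values(linkGraph)) {
    for (const t of targets) linkedTo.add(t);
  }
  // An orphan is a crawled page with zero inbound internal links.
  return Object.keys(linkGraph).filter((slug) => !linkedTo.has(slug));
}

const graph = {
  'home': ['about', 'rooms'],
  'about': ['home'],
  'rooms': [],
  'old-offers': [], // nothing links here → orphan
};
console.log(findOrphans(graph)); // → ['old-offers']
```

Weak and overlinked pages would fall out of the same inbound-link counts, just with thresholds instead of zero.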
No setup needed. Open `prompts/seo-audit.md`, copy the prompt, replace `[WEBSITE URL]`, and paste it into Claude, ChatGPT, Gemini, or any AI with web search. You'll get a competitor analysis, SEO gap report, and prioritised recommendations in minutes.
The `/seo:audit` skill is bundled in this repo at `.claude/commands/seo/audit.md`. Claude Code automatically loads it when you open the project — no plugin installation required.
For a fully automated run (keyword selection, strategy doc, PDF export):
The MCP server lets Claude call SerpAPI directly as a tool, without needing the API key passed as an argument.
1 — Start the MCP server (once, in a separate terminal tab):
```
git clone https://github.com/serpapi/serpapi-mcp.git
cd serpapi-mcp
uv sync && uv run src/server.py
```

The server runs at `http://localhost:8000/{YOUR_API_KEY}/mcp`. Leave it running.
2 — Run the audit:
```
claude
/seo:audit https://yoursite.com
```

Claude detects the MCP server, uses it for all keyword research, and completes the full pipeline automatically.
Pass the key as a second argument or set it as an environment variable:
```
export SERP_API_KEY=your_key_here
claude
/seo:audit https://yoursite.com
```

Both options require a paid Claude account (Pro, Team, or Enterprise).
Pass multiple domains to audit them in parallel — one agent per domain, all running simultaneously:
```
/seo:audit staydingleway.ie thehawthornroomsdingle.com
```

If a completed audit already exists for any domain (`SEO_CONTENT_STRATEGY.md` present in the output folder), you will be asked whether to re-run or skip it before any work begins.
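The one-agent-per-domain fan-out amounts to launching all audits concurrently and collecting results as each finishes. A minimal sketch of that concurrency shape, assuming a hypothetical async `auditDomain()` that runs the full pipeline for one domain:

```javascript
// Sketch only: the real skill delegates each domain to a Claude agent.
// `auditDomain` is a hypothetical per-domain pipeline function.
async function auditAll(domains, auditDomain) {
  // allSettled: one failing domain doesn't abort the others.
  const results = await Promise.allSettled(domains.map((d) => auditDomain(d)));
  return results.map((r, i) => ({
    domain: domains[i],
    ok: r.status === 'fulfilled',
  }));
}

// Usage with a stub auditor:
auditAll(
  ['staydingleway.ie', 'thehawthornroomsdingle.com'],
  async (d) => `${d} done`
).then((summary) => console.log(summary));
```

`Promise.allSettled` (rather than `Promise.all`) matches the behaviour you'd want here: a broken site in the batch still lets the other audits complete.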
```
node run.js crawl https://yoursite.com
node run.js link-audit https://yoursite.com     # orphan/weak/overlinked page analysis → LINK_AUDIT.md
node run.js suggest https://yoursite.com        # preview keyword suggestions
node run.js research https://yoursite.com YOUR_SERP_API_KEY
node run.js report https://yoursite.com
python3 generate_pdf.py ./yoursite-com-seo      # pass the output dir explicitly
```

Add `--auto` to the suggest or research step to skip the interactive prompt.
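The `./yoursite-com-seo` directory above is derived from the site URL. A sketch of how such a slug might be built (an assumption — `run.js`'s actual slug rules aren't reproduced here):

```javascript
// Hypothetical URL → output-dir mapping, e.g.
// https://yoursite.com → ./yoursite-com-seo
function outputDirFor(siteUrl) {
  // Drop a leading "www." and turn dots into dashes.
  const host = new URL(siteUrl).hostname.replace(/^www\./, '');
  return `./${host.replace(/\./g, '-')}-seo`;
}

console.log(outputDirFor('https://yoursite.com')); // → "./yoursite-com-seo"
```

Knowing the mapping matters mainly for the PDF step, which needs the directory passed explicitly.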
Settings live in `config.js`. CLI arguments always take precedence:
```js
module.exports = {
  siteUrl: 'https://your-client-site.com',
  serpApiKey: process.env.SERP_API_KEY || 'YOUR_SERP_API_KEY',
  searchCountry: 'ie', // ie, gb, us, au, etc.
  searchLanguage: 'en',
  keywords: [
    'your keyword one', // leave as-is to trigger auto-suggest
  ],
  outputDir: './output',
};
```

- The crawler handles JS-rendered sites (Wix, Squarespace, etc.). If only the homepage is captured, the content still gives enough signal for keyword selection
- Re-run individual steps without re-crawling: `node run.js report` or `python3 generate_pdf.py ./<site>-seo`
- `npm run pdf` uses the `outputDir` in `config.js` (defaults to `./output`) — pass the path explicitly when using URL-derived output dirs
- All `*-seo/` output folders are gitignored — client data never gets committed
- If `playwright install chromium` was not run after `npm run setup`, the PDF export will fail — run it once to fix
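The "CLI arguments always take precedence" rule can be pictured as a simple layered merge. This is a sketch of the assumed precedence order, not `run.js`'s actual option parsing:

```javascript
// Hypothetical settings resolution: later sources win.
// defaults < config.js < CLI flags
const defaults = { outputDir: './output', searchCountry: 'ie', searchLanguage: 'en' };

function resolveSettings(config, cliArgs) {
  return { ...defaults, ...config, ...cliArgs };
}

// A CLI-supplied output dir overrides config.js; untouched keys fall through.
const settings = resolveSettings(
  { searchCountry: 'gb', outputDir: './client-seo' },
  { outputDir: './yoursite-com-seo' }
);
console.log(settings);
```

Spread order does the work: any key repeated in a later object silently replaces the earlier value.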
See CONTRIBUTING.md. All contributions require a DCO sign-off (`git commit -s`).