A Claude Code plugin that tells you why your site won't get cited by AI — and fixes the boring discoverability files automatically.
170+ rules. 10 scanners. URL + directory scanning. Auto-fix for robots.txt, sitemap.xml, llms.txt, and JSON-LD.
Built by Kailesk Khumar, solo founder of houseofmvps.com
One indie hacker. One plugin. Every search engine covered.
claude-rank works as a full Claude Code plugin with skills, agents, and slash commands.
Option A — Install from GitHub (recommended):
/plugin marketplace add Houseofmvps/claude-rank
/plugin install claude-rank@Houseofmvps-claude-rank
Option B — Install from a local clone:
git clone https://github.com/Houseofmvps/claude-rank.git

Then in Claude Code:
/plugin marketplace add ./claude-rank
/plugin install claude-rank@claude-rank
After installing, run /reload-plugins to activate in your current session.
Once installed, use slash commands:
/claude-rank:rank # Smart routing — detects what your project needs
/claude-rank:rank-audit # Full 10-scanner audit with auto-fix + GSC action plan
/claude-rank:rank-geo # Deep AI search optimization audit
/claude-rank:rank-aeo # Answer engine optimization audit
/claude-rank:rank-fix # Auto-fix all findings in one command
/claude-rank:rank-schema # Detect, validate, generate, inject JSON-LD
/claude-rank:rank-compete # Competitive X-Ray — compare vs any competitor URL
/claude-rank:rank-citability # AI Citability Score — 7-dimension analysis
/claude-rank:rank-content # Content intelligence analysis
/claude-rank:rank-perf # Performance risk assessment
/claude-rank:rank-vertical # E-Commerce / Local Business SEO
/claude-rank:rank-security # Security headers audit
Zero configuration. claude-rank reads your project structure and self-configures.
npx @houseofmvps/claude-rank scan ./my-project # Local directory
npx @houseofmvps/claude-rank scan https://example.com # Live URL (crawls up to 50 pages)
npx @houseofmvps/claude-rank geo https://example.com # GEO audit on any URL
npx @houseofmvps/claude-rank aeo https://example.com # AEO audit on any URL
npx @houseofmvps/claude-rank citability ./my-project # AI citability score
npx @houseofmvps/claude-rank content ./my-project # Content intelligence
npx @houseofmvps/claude-rank keyword ./my-project # Keyword clustering
npx @houseofmvps/claude-rank brief ./my-project "seo" # Content brief
npx @houseofmvps/claude-rank perf https://example.com # Performance audit on any URL
npx @houseofmvps/claude-rank vertical ./my-project # E-commerce / local SEO
npx @houseofmvps/claude-rank security https://example.com # Security audit on any URL
npx @houseofmvps/claude-rank compete https://comp.com . # Competitive X-Ray
npx @houseofmvps/claude-rank gsc ./gsc-export.csv # GSC data analysis
npx @houseofmvps/claude-rank schema ./my-project # Structured data
npx @houseofmvps/claude-rank scan . --report html # Agency-ready HTML report
npx @houseofmvps/claude-rank scan . --threshold 80 # CI/CD mode
npx @houseofmvps/claude-rank scan . --json # Raw JSON output

npm install -g @houseofmvps/claude-rank # scoped (official)
npm install -g claude-rank-seo # unscoped (shorter)
claude-rank scan ./my-project

Both packages are identical. claude-rank-seo is an unscoped alias for easier npx usage.
You shipped your SaaS. Traffic is flat. You Google your product name — page 3. You ask ChatGPT about your niche — your site isn't mentioned. Perplexity doesn't cite you. Google AI Overviews skips you entirely.
Most SEO tools check title tags and call it a day. They don't know that:
- AI search engines are replacing traditional search — and your content isn't optimized for them
- Featured snippets and voice search have completely different optimization rules than regular SEO
- Your robots.txt is blocking GPTBot, PerplexityBot, and ClaudeBot — AI can't cite what it can't crawl
- You don't have an llms.txt — the file AI assistants look for to understand your project
- Your structured data is missing or broken — you're invisible to rich results
That's not an SEO problem. That's a visibility problem across every search surface that exists in 2026.
/claude-rank:rank-audit
One command. Ten scanners run in parallel — SEO, GEO, AEO, AI Citability, Content Intelligence, Keyword Clustering, Performance, Vertical SEO, Security, and Content Brief. 170+ rules checked. Every finding gets an automated fix. Score tracked over time. Then it tells you exactly what to do in Google Search Console and Bing Webmaster Tools.
SEO Score: 87/100 ████████████░░ (54 rules)
GEO Score: 92/100 █████████████░ (45 rules + E-E-A-T)
AEO Score: 78/100 ██████████░░░░ (12 rules)
Citability Score: 65/100 ████████░░░░░░ (7 dimensions)
Performance: 90/100 █████████████░ (20 rules)
Security: 80/100 ███████████░░░ (15 rules)
Overall: 86/100 READY TO RANK
Score below 80? Run /claude-rank:rank-fix and it auto-generates what's missing — robots.txt, sitemap.xml, llms.txt, JSON-LD schema — then re-scans to show your improvement.
Traditional search optimization. The foundation.
| Category | What it checks |
|---|---|
| Meta | Title (length, uniqueness), meta description, viewport, charset, canonical URL, lang attribute |
| Content | H1 presence, heading hierarchy, word count (<main> only), image alt text, thin content, readability (Flesch-Kincaid), passive voice |
| Technical | robots.txt, sitemap.xml, HTTPS, mobile-friendly viewport, analytics (30+ providers), redirect chains, lazy loading, hreflang |
| Structured Data | JSON-LD presence, validation against Google's required fields (14 schema types), dateModified freshness |
| Cross-Page | Duplicate titles, duplicate descriptions, duplicate content (Jaccard >80%), canonical conflicts, orphan pages, broken internal links |
Generative Engine Optimization. For AI search: ChatGPT, Perplexity, Gemini, Google AI Overviews.
| Category | What it checks |
|---|---|
| AI Crawlers | robots.txt for 11 bots: GPTBot, PerplexityBot, ClaudeBot, Claude-Web, Google-Extended, CCBot, AppleBot, Bytespider, Meta-ExternalAgent, Amazonbot, anthropic-ai |
| AI Discoverability | llms.txt, sitemap.xml, structured data quality |
| Content Structure | Question-format H2s (filters marketing headers), definition patterns, statistics, data tables, lists |
| Citation Readiness | 134-167 word passage sweet spot, direct answers in first 40-60 words, citations to .edu/.gov/.org |
| E-E-A-T | Author bio, credentials/expertise, about/team page, reviews/testimonials, external authority links |
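The citation-readiness heuristics above can be sketched as a simple passage check. This is an illustrative reconstruction of the thresholds the table describes (134-167 word passages, a direct answer in the first 40-60 words); the function names and sentence-splitting logic are assumptions, not claude-rank's actual implementation.

```javascript
// Illustrative citation-readiness check based on the GEO heuristics above.
function wordCount(text) {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

function citationReadiness(passage) {
  const words = wordCount(passage);
  // Treat everything up to the first sentence terminator as the "direct answer"
  const firstSentence = passage.split(/(?<=[.!?])\s/)[0] || '';
  return {
    inSweetSpot: words >= 134 && words <= 167, // citable passage length
    answersEarly: wordCount(firstSentence) <= 60, // answer up front
  };
}
```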
Answer Engine Optimization. Featured snippets, People Also Ask, voice search.
| Category | What it checks |
|---|---|
| Schema | FAQPage, HowTo, speakable, Article structured data |
| Snippet Fitness | Answer paragraph length (40-60 words optimal), numbered steps, definition patterns |
| Voice Search | Concise answers under 29 words, conversational phrasing |
Proprietary scoring algorithm. Scores how likely AI engines are to cite each page (0-100).
| Dimension | Weight | What it measures |
|---|---|---|
| Statistic Density | 0-15 | Data points per 200 words |
| Front-loading | 0-15 | Key answer in first 30% of content |
| Source Citations | 0-15 | Links to .edu/.gov/research domains |
| Expert Attribution | 0-15 | Person schema, author bios, expert quotes |
| Definition Clarity | 0-10 | "X is..." / "X refers to..." extraction patterns |
| Schema Completeness | 0-15 | Organization + Author + Article + FAQ + Breadcrumb |
| Content Structure | 0-15 | Heading hierarchy, lists, paragraph segmentation |
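The weighted table above composes into a 0-100 score. A minimal sketch, assuming each dimension is scored independently and clamped to its cap before summing (how claude-rank derives the per-dimension sub-scores is internal to the tool):

```javascript
// Per-dimension caps mirroring the Citability table above (sum = 100).
const CAPS = {
  statisticDensity: 15, frontLoading: 15, sourceCitations: 15,
  expertAttribution: 15, definitionClarity: 10,
  schemaCompleteness: 15, contentStructure: 15,
};

function citabilityScore(subScores) {
  let total = 0;
  for (const [dim, cap] of Object.entries(CAPS)) {
    // Clamp each dimension to 0..cap before summing
    total += Math.min(cap, Math.max(0, subScores[dim] ?? 0));
  }
  return total; // 0-100
}
```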
Deep content quality analysis across all pages.
| Category | What it analyzes |
|---|---|
| Readability | Flesch-Kincaid score, Gunning Fog index, per-page scoring |
| Duplicate Detection | Jaccard similarity fingerprinting across all page pairs |
| Thin Content | Pages under 300 words flagged |
| Internal Linking | Suggests cross-links for pages sharing H2 topics |
| Orphan Pages | Pages with zero incoming internal links |
| Hub Pages | Identifies pillar pages with 5+ outgoing internal links |
| Topic Clusters | Groups pages by shared keywords |
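The duplicate-detection row works on Jaccard similarity: two pages whose content fingerprints overlap by more than 80% are flagged. A minimal sketch using word shingles — the shingling strategy here is an assumption; the scanner's actual fingerprinting may differ:

```javascript
// Break text into overlapping 3-word shingles for fingerprinting.
function shingles(text, size = 3) {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const out = new Set();
  for (let i = 0; i + size <= words.length; i++) {
    out.add(words.slice(i, i + size).join(' '));
  }
  return out;
}

// Jaccard similarity: |intersection| / |union| of the two shingle sets.
function jaccard(a, b) {
  const inter = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

function isDuplicate(pageA, pageB) {
  return jaccard(shingles(pageA), shingles(pageB)) > 0.8;
}
```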
| Category | What it analyzes |
|---|---|
| Primary Keyword | Highest-weighted keyword per page (from H1/title) |
| TF-IDF Scoring | Term frequency / inverse document frequency across your content |
| Topic Clusters | Pages grouped by 3+ shared significant keywords |
| Keyword Cannibalization | Multiple pages targeting the same primary keyword |
| Content Gaps | Keywords only covered by 1 page — opportunity for more content |
| Pillar Suggestions | When 3+ pages share a theme, suggests creating a pillar page |
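TF-IDF, the weighting behind the clustering above, scales a term's frequency within one page by how rare it is across the whole site, so site-wide boilerplate words score near zero. A simplified formula for illustration (claude-rank's exact smoothing may differ):

```javascript
// Simplified TF-IDF: term frequency in one page times log-scaled rarity
// across all pages. Pages are represented as arrays of words.
function tfidf(term, pageWords, allPages) {
  const tf = pageWords.filter((w) => w === term).length / pageWords.length;
  const docsWithTerm = allPages.filter((p) => p.includes(term)).length;
  const idf = Math.log(allPages.length / (1 + docsWithTerm));
  return tf * idf;
}
```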
Generate SEO-optimized writing briefs from your existing content.
| Category | What it generates |
|---|---|
| Suggested Title | H1 based on target keyword and existing content patterns |
| Word Count Target | Avg of related pages + 20% to outperform |
| H2 Outline | From analyzing related content structure |
| Questions to Answer | Extracted from FAQ patterns and question headings |
| Internal Links | Pages to link to/from for topical authority |
| Related Keywords | Extracted from related pages via TF-IDF |
| GEO Tips | Statistics to include, expert quotes, citation opportunities |
Performance and mobile-first indexing checks from static HTML. No Chrome needed.
| Category | What it checks |
|---|---|
| CLS Risk | Images without width/height dimensions |
| Render Blocking | Scripts without async/defer, excessive blocking scripts |
| Payload | Large inline CSS/JS (>50KB), too many external domains |
| Loading | Missing lazy loading, missing fetchpriority for LCP image |
| Fonts | Web fonts without font-display: swap |
| Images | Responsive images (srcset/sizes), modern formats (WebP/AVIF) |
| Mobile | Missing viewport meta, non-responsive viewport, small tap targets (<44px), small font sizes (<12px), fixed-width elements (>500px) |
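The CLS-risk row can be approximated statically: an `<img>` tag without explicit width and height attributes can shift layout when it loads. This regex-based sketch approximates what a proper HTML parser (the tool ships htmlparser2) would do more robustly:

```javascript
// Flag <img> tags that lack an explicit width or height attribute,
// a common source of Cumulative Layout Shift.
function imagesMissingDimensions(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter(
    (tag) => !/\bwidth\s*=/i.test(tag) || !/\bheight\s*=/i.test(tag)
  );
}
```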
Auto-detects e-commerce and local business sites, then runs specialized checks. SaaS sites with pricing pages are correctly excluded via strong/weak signal weighting.
| Type | Rules | What it checks |
|---|---|---|
| E-Commerce | 10 | Product schema, Offer schema, AggregateRating, reviews, product images, descriptions, breadcrumbs, pricing, availability, duplicate descriptions |
| Local Business | 10 | LocalBusiness schema, NAP data, geo coordinates, opening hours, Google Maps, clickable phone, local keywords, address element, service area pages |
Security compliance that directly affects SEO (Google confirmed HTTPS as a ranking signal).
| Category | What it checks |
|---|---|
| HTTPS | Mixed content, upgrade-insecure-requests |
| Headers | CSP, X-Content-Type-Options, X-Frame-Options, Referrer-Policy, Permissions-Policy |
| Integrity | Subresource Integrity (SRI) on external scripts |
| Safety | Inline event handlers, form actions over HTTP, target="_blank" noopener, iframe sandbox |
Point at any competitor URL. claude-rank fetches their page and compares everything side-by-side:
- Tech Stack — 50+ detection patterns (Wappalyzer-style): framework, CMS, CDN, analytics, payments, chat
- SEO Signals — title, meta, canonical, Open Graph, Twitter Card, structured data
- Content Depth — word count, heading structure, links
- Conversion Signals — CTAs, pricing, demo booking, social proof, waitlists (24 patterns)
- Quick Wins — gaps to close and strengths to keep
claude-rank compete https://competitor.com ./my-project

No API keys. No rate limits. No signup. Just point and compare.
claude-rank cwv https://example.com

| Metric | Good | Poor |
|---|---|---|
| LCP (Largest Contentful Paint) | < 2.5s | > 4.0s |
| CLS (Cumulative Layout Shift) | < 0.1 | > 0.25 |
| FCP (First Contentful Paint) | < 1.8s | > 3.0s |
| TBT (Total Blocking Time) | < 200ms | > 600ms |
No separate install — uses npx -y lighthouse@12 automatically. Just needs Chrome.
Every finding has a fix. Not "consider adding" — actual file generation:
| Generator | What it creates |
|---|---|
| robots.txt | AI-friendly rules allowing all 11 AI crawlers + sitemap directive |
| sitemap.xml | Auto-detected routes (Next.js App/Pages Router, static HTML) |
| llms.txt | AI discoverability file from your package.json |
| JSON-LD | 12 types: Organization, Article, Product, FAQPage, HowTo, LocalBusiness, Person, WebSite, BreadcrumbList, SoftwareApplication, VideoObject, ItemList |
Detect → Find all JSON-LD in your HTML files
Validate → Check against Google's required fields (14 schema types)
Generate → Create missing schema from your project data
Inject → Add generated schema into your HTML <head>
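The Generate and Inject steps can be sketched as follows: build a minimal JSON-LD block from project metadata and splice it in before `</head>`. The field choices here are illustrative assumptions; claude-rank derives its schema from your actual project data.

```javascript
// Build a minimal Organization JSON-LD object from package.json-style metadata.
function generateOrgSchema(pkg) {
  return {
    '@context': 'https://schema.org',
    '@type': 'Organization',
    name: pkg.name,
    url: pkg.homepage,
    description: pkg.description,
  };
}

// Splice the serialized schema into the document head.
function injectSchema(html, schema) {
  const tag = `<script type="application/ld+json">${JSON.stringify(schema)}</script>`;
  return html.replace('</head>', `${tag}\n</head>`);
}
```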
This is what separates claude-rank from every other SEO scanner. After fixing issues, it tells you exactly what to do next:
Google Search Console: Submit sitemap, request indexing for money pages, check coverage, validate rich results, monitor CWV.
Bing Webmaster Tools: Submit URLs (10,000/day), enable IndexNow for near-instant re-indexing, verify robots.txt (Bingbot powers Microsoft Copilot and ChatGPT Browse).
AI Search Verification: Test your brand in ChatGPT, Perplexity, Gemini. Verify llms.txt. Weekly monitoring checklist.
claude-rank scan https://example.com # Crawls up to 50 pages
claude-rank scan https://example.com --pages 10 # Limit to 10 pages
claude-rank scan https://example.com --single # Just one page

BFS crawl, 3 concurrent fetches, cross-page duplicate/canonical analysis.

claude-rank scan ./my-project --report html

Self-contained claude-rank-report.html — dark theme, score rings, detailed findings. No external dependencies. Ready to send to clients.
claude-rank scan ./my-project --threshold 80
# Exit code 1 if score < 80 — add to your CI pipeline

Every audit saves scores. See trends over time:
2026-03-25 SEO: 62 GEO: 45 AEO: 38
2026-03-26 SEO: 78 GEO: 72 AEO: 65 (+16, +27, +27)
2026-03-28 SEO: 87 GEO: 92 AEO: 78 (+9, +20, +13)
All scores: 0-100. Higher is better.
| Severity | Deduction | Example |
|---|---|---|
| Critical | -20 | No title tag, robots.txt blocking all crawlers |
| High | -10 | Missing meta description, no JSON-LD, AI bots blocked |
| Medium | -5 | Title too long, missing OG tags, no llms.txt |
| Low | -2 | Missing lang attribute, no analytics detected |
Same rule on multiple pages = one deduction (not N). Consistent across all 10 scanners.
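The deduction model above can be sketched in a few lines: each severity carries a fixed penalty, and a rule that fires on multiple pages is deducted once, not once per page. Exact bookkeeping inside claude-rank may differ; this is an illustrative reconstruction.

```javascript
// Penalty per severity, matching the table above.
const PENALTY = { critical: 20, high: 10, medium: 5, low: 2 };

function computeScore(findings) {
  // Deduplicate by rule id so the same rule across pages deducts once
  const seen = new Set();
  let score = 100;
  for (const { ruleId, severity } of findings) {
    if (seen.has(ruleId)) continue;
    seen.add(ruleId);
    score -= PENALTY[severity] ?? 0;
  }
  return Math.max(0, score);
}
```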
| Command | Description |
|---|---|
| scan ./project | SEO scan — 54 rules |
| scan https://example.com | Crawl + scan live site (up to 50 pages) |
| geo ./project or geo https://... | GEO — AI search optimization (45 rules + E-E-A-T) |
| aeo ./project or aeo https://... | AEO — answer engine optimization (12 rules) |
| citability ./project or URL | AI Citability Score — 7 dimensions |
| content ./project or URL | Content intelligence — readability, duplicates, linking |
| keyword ./project or URL | Keyword clustering — TF-IDF, cannibalization, gaps |
| brief ./project "keyword" | Content brief generator (with search intent) |
| perf ./project or URL | Performance + mobile audit (20 rules) |
| vertical ./project or URL | Vertical SEO — e-commerce + local (20 rules) |
| security ./project or URL | Security headers audit (15 rules) |
| compete https://comp.com . | Competitive X-Ray |
| gsc ./export.csv | Google Search Console data analysis |
| cwv https://example.com | Core Web Vitals via Lighthouse |
| schema ./project or URL | Detect + validate structured data |
| help | Show available commands |
Flags: --json (raw output) | --report html (visual report) | --threshold N (CI mode) | --pages N (crawl limit) | --single (one page only)
| Feature | claude-rank | claude-seo |
|---|---|---|
| SEO rules | 54 | ~20 |
| GEO — AI search (Perplexity, ChatGPT, Gemini) | 45 rules + E-E-A-T | Basic |
| AEO — featured snippets, voice search | 12 rules | None |
| AI Citability Score (7-dimension) | Yes | No |
| Content Intelligence (readability, duplicates) | Yes | No |
| Keyword Clustering (TF-IDF) | Yes | No |
| Content Brief Generator | Yes | No |
| Performance + Mobile Audit | 20 rules | No |
| Mobile-first indexing checks | 5 rules | No |
| Vertical SEO (e-commerce + local) | Auto-detection | No |
| Security Headers Audit | Yes | No |
| Competitive X-Ray (50+ tech patterns) | Side-by-side | No |
| Core Web Vitals / Lighthouse | Yes | No |
| Schema engine (detect/validate/generate/inject) | Full CRUD | Detect only |
| Auto-fix generators (robots.txt, sitemap, llms.txt, JSON-LD) | Yes | No |
| Post-audit GSC/Bing action plans | Yes | No |
| Cross-page analysis (duplicates, orphans, canonicals) | Yes | No |
| Multi-page URL crawling | Up to 50 pages | No |
| HTML report export | Agency-ready | No |
| CI/CD threshold mode | Yes | No |
| Score tracking with trends | Yes | No |
| Broken internal link detection | Filesystem resolution | No |
| Image optimization audit (srcset, WebP/AVIF) | Yes | No |
| URL scanning for all commands | Yes | No |
| GSC CSV data integration | Yes | No |
| Search intent classification | Yes | No |
| Intent-aware cannibalization | Yes | No |
| AI bot detection | 11 bots | Basic |
claude-seo tells you what's wrong. claude-rank fixes it.
- GEO (Generative Engine Optimization) — optimization for AI-powered search engines that generate answers (Perplexity, ChatGPT Search, Gemini, Google AI Overviews). NOT geographic.
- AEO (Answer Engine Optimization) — optimization for direct answer features: featured snippets, People Also Ask, voice assistants.
- SEO (Search Engine Optimization) — traditional Google/Bing crawlability, indexability, on-page signals.
| Protection | How |
|---|---|
| No shell injection | execFileSync with array args — zero shell interpolation |
| SSRF protection | All HTTP tools block private IPs, cloud metadata, non-HTTP schemes |
| No telemetry | Zero data collection. No phone-home. Ever. |
| 1 dependency | htmlparser2 only (30KB). No native bindings. No node-gyp. |
| 372 tests | All scanners, CLI, integration, security tests |
| File safety | 10MB read cap. 5MB response cap. Restrictive write permissions. |
See SECURITY.md for the full vulnerability disclosure policy.
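The "no shell injection" row boils down to one pattern: execFileSync with an array of arguments never passes through a shell, so metacharacters in a URL or path cannot be interpreted as commands. A minimal illustration of the pattern (the specific command below is a hypothetical example, not claude-rank's exact invocation):

```javascript
import { execFileSync } from 'node:child_process';

// Arguments are passed as discrete strings with no shell in between,
// so `$(whoami); echo pwned` stays a literal string, never a command.
function safeRun(cmd, args) {
  return execFileSync(cmd, args, { encoding: 'utf8' });
}

const out = safeRun('node', ['-e', 'console.log("$(whoami); echo pwned")']);
```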
| Category | Count |
|---|---|
| Scanners | 10 (SEO, GEO, AEO, Citability, Content, Keywords, Briefs, Perf+Mobile, Vertical, Security) |
| Rules | 170+ across all scanners |
| Tools | 18 (scanners + GSC analyzer + schema engine + robots/sitemap/llms.txt + competitive X-ray + formatter) |
| CLI Commands | 16 (all accept URLs) |
| Agents | 9 autonomous auditors |
| Skills | 7 plugin skills |
| Tests | 372 |
- Node.js >= 18 (tested on 18, 20, 22 via CI)
- ESM environment ("type": "module")
- No build step required
- Single dependency: htmlparser2 (30KB)
- Optional for Core Web Vitals: Chrome/Chromium
I built claude-rank alone — nights and weekends, between building my own SaaS products. No VC funding. No team. Just one person who got tired of being invisible to AI search and decided to fix it for everyone.
This plugin is free forever. No pro tier. No paywalls. No "upgrade to unlock." Every feature — all 10 scanners, 12 slash commands, 9 agents — is yours, completely free.
If claude-rank helped your site rank higher — one AI citation it earned you, one missing schema it generated, one robots.txt fix that unblocked GPTBot — I'd be grateful if you considered sponsoring.
— Kailesk Khumar, solo founder of houseofmvps.com
Found a bug? Want a new scanner rule? Open an issue or PR.
git clone https://github.com/Houseofmvps/claude-rank.git
cd claude-rank
npm install
npm test # 372 tests, node:test
node tools/<tool>.mjs # No build step

See CONTRIBUTING.md for guidelines.
MIT licensed (see LICENSE). Free forever. No pro tier. No paywalls.
Every star makes this project more visible to developers who need it.