Methodology
The audit produces a single composite score out of 100, calculated from three weighted layers. The methodology is open and inspectable, so you (or your dev team) can verify exactly what we're checking.
The composite formula
AI Visibility Score = AI Search Visibility (40) + Agent Transactability (40) + Technical Readiness (20)
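The three layer subtotals simply add up: a site scoring 28/40 on AI Search Visibility, 22/40 on Agent Transactability, and 15/20 on Technical Readiness lands at 65/100. A minimal sketch (the function name and signature are ours, for illustration only):

```python
# Minimal sketch: the composite is the sum of the three layer subtotals,
# each capped at its weight (40 / 40 / 20).
def composite_score(search_visibility: int, transactability: int, technical: int) -> int:
    assert 0 <= search_visibility <= 40
    assert 0 <= transactability <= 40
    assert 0 <= technical <= 20
    return search_visibility + transactability + technical

print(composite_score(28, 22, 15))  # -> 65 out of 100
```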
Layer 1 · AI Search Visibility (40 pts)
Will ChatGPT, Perplexity, Gemini, and Claude cite you when buyers ask category questions? A sketch of these raw-HTML checks follows the list.
- Brand citation signals (12 pts) — Organization schema present, descriptive title, meta description.
- Entity clarity (6 pts) — sameAs links to Wikipedia, Wikidata, Crunchbase, LinkedIn.
- Passage citability (8 pts) — H2/H3 density and average paragraph length. AI cites passages, not pages.
- Freshness (4 pts) — Last-modified signals. AI cites pages averaging 393 days fresher than the rest of the SERP.
- AI crawler access (4 pts) — robots.txt allow-list for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot.
- /llms.txt presence (2 pts) — emerging standard for LLM-targeted content guidance.
- Dedicated LLM reference page (2 pts) — /llm-info, /ai-overview, or equivalent.
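A minimal sketch of how several Layer 1 signals can be read from raw HTML and well-known files with only the Python standard library. The helper names, regexes, and heuristics here are illustrative assumptions, not the production scorer:

```python
"""Illustrative Layer 1 probe using only the Python standard library.
Helper names and heuristics are assumptions, not the production scorer."""
import re
import urllib.request

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

def fetch(url: str) -> str:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except Exception:
        return ""  # missing file or error counts as "not present"

def layer1_signals(origin: str) -> dict:
    html = fetch(origin)
    robots = fetch(origin + "/robots.txt")
    return {
        # Brand citation signals: Organization schema and meta description
        "organization_schema": bool(re.search(r'"@type"\s*:\s*"Organization"', html)),
        "meta_description": bool(re.search(r'<meta[^>]+name=["\']description["\']', html, re.I)),
        # Entity clarity: sameAs links in JSON-LD
        "same_as_present": '"sameAs"' in html,
        # Passage citability: heading density as a rough proxy
        "h2_h3_count": len(re.findall(r"<h[23][\s>]", html, re.I)),
        # AI crawler access: which AI bots does robots.txt mention at all?
        "ai_bots_named_in_robots": [b for b in AI_CRAWLERS if b in robots],
        # Emerging standards: /llms.txt and a dedicated LLM reference page
        "llms_txt": bool(fetch(origin + "/llms.txt")),
        "llm_info_page": bool(fetch(origin + "/llm-info")),
    }
```

Calling `layer1_signals("https://example.com")` returns a dict of raw signals; the scorer would then map each signal to its point allocation.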
Layer 2 · Agent Transactability (40 pts)
Can AI agents acting on a buyer's behalf find products, fill out forms, and complete actions? A sketch of these checks follows the list.
- Server-side content ratio (8 pts) — text content visible in raw HTML without JavaScript.
- Structured data coverage (6 pts) — JSON-LD blocks present.
- Structured data accuracy (6 pts) — schema markup matches rendered content.
- JS-free navigation (6 pts) — count of <a href> links in raw HTML.
- Critical-path completion (8 pts) — visible contact, checkout, scheduler, or buy paths.
- Form action present (4 pts) — <form action> in raw HTML.
- Agent error rate (2 pts) — HTTP 200 OK on first request.
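A minimal sketch of the Layer 2 checks against raw HTML, again using only the standard library. The function name, user-agent string, and text-ratio heuristic are assumptions for illustration:

```python
"""Illustrative Layer 2 probe (agent transactability) against raw HTML.
Names and the text-ratio heuristic are assumptions, not the production scorer."""
import re
import urllib.request
from urllib.error import HTTPError

def agent_signals(url: str) -> dict:
    req = urllib.request.Request(url, headers={"User-Agent": "audit-sketch/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
            html = resp.read().decode("utf-8", errors="replace")
    except HTTPError as err:  # agent error rate: non-200 on the first request
        status, html = err.code, ""

    # Server-side content ratio proxy: visible text after stripping scripts,
    # styles, and tags, relative to the total payload size.
    stripped = re.sub(r"<(script|style)[^>]*>.*?</\1>", "", html, flags=re.S | re.I)
    text = re.sub(r"\s+", " ", re.sub(r"<[^>]+>", " ", stripped)).strip()

    return {
        "http_200_first_request": status == 200,
        "jsonld_blocks": len(re.findall(r'type=["\']application/ld\+json["\']', html, re.I)),
        "href_links": len(re.findall(r"<a\s[^>]*href=", html, re.I)),  # JS-free navigation
        "form_action": bool(re.search(r"<form\s[^>]*action=", html, re.I)),
        "text_to_html_ratio": round(len(text) / max(len(html), 1), 3),
    }
```

Critical-path completion and structured-data accuracy need more context than a single request, so they aren't sketched here.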
Layer 3 · Technical Readiness (20 pts)
The foundational machine-readability layer. A sketch of these checks follows the list.
- Schema breadth (4 pts) — count of distinct schema.org types.
- Sitemap freshness (3 pts) — sitemap.xml present with <lastmod> dates.
- Latency (3 pts) — sub-1500ms response time.
- Canonical URL (2 pts) — <link rel="canonical"> in head.
- Hreflang (2 pts) — correct international tags (or N/A for single-locale).
- Open Graph (2 pts) — og:title and og:description.
- WebMCP discovery (4 pts) — /.well-known/mcp.json describing agent-callable capabilities.
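A minimal sketch of the Layer 3 checks. The file paths (sitemap.xml, /.well-known/mcp.json) come from the methodology above; the helper names and heuristics are illustrative assumptions:

```python
"""Illustrative Layer 3 probe (technical readiness).
Helper names and heuristics are assumptions, not the production scorer."""
import re
import time
import urllib.request

def timed_fetch(url: str) -> tuple[str, float]:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except Exception:
        body = ""
    return body, (time.monotonic() - start) * 1000  # elapsed time in ms

def technical_signals(origin: str) -> dict:
    html, latency_ms = timed_fetch(origin)
    sitemap, _ = timed_fetch(origin + "/sitemap.xml")
    mcp, _ = timed_fetch(origin + "/.well-known/mcp.json")
    return {
        # Schema breadth: count of distinct @type values in the raw HTML
        "schema_types": len(set(re.findall(r'"@type"\s*:\s*"([^"]+)"', html))),
        "sitemap_lastmod": "<lastmod>" in sitemap,
        "latency_under_1500ms": latency_ms < 1500,
        "canonical": bool(re.search(r'<link[^>]+rel=["\']canonical["\']', html, re.I)),
        "hreflang": "hreflang=" in html.lower(),  # presence only, not correctness
        "open_graph": bool(re.search(r'property=["\']og:(title|description)["\']', html, re.I)),
        "webmcp_manifest": mcp.lstrip().startswith("{"),
    }
```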
Severity levels
- Critical — blocks AI agents or AI search from completing primary tasks.
- Warning — significantly degrades performance without blocking primary tasks.
- Quick win — low-effort fix with measurable impact.
- Info — opportunistic improvement.
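One way a failed check could be reported alongside its severity; the data shape below is an illustrative assumption, not the tool's actual report format:

```python
"""Sketch of how a failed check might carry one of the four severity levels.
The dataclass shape is an assumption, not the tool's actual report format."""
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"    # blocks AI agents or AI search from primary tasks
    WARNING = "warning"      # significantly degrades performance
    QUICK_WIN = "quick_win"  # low-effort fix with measurable impact
    INFO = "info"            # opportunistic improvement

@dataclass
class Finding:
    check: str
    severity: Severity
    points_lost: int
    detail: str

example = Finding(
    check="ai_crawler_access",
    severity=Severity.CRITICAL,
    points_lost=4,
    detail="robots.txt disallows GPTBot and PerplexityBot site-wide.",
)
```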
What v0.1 doesn't do (yet)
This is the free, fast tier. It runs entirely on raw HTML signals and well-known metadata files, completes in under 90 seconds, and makes no external API calls. Coming in the paid tier:
- Live citation testing across ChatGPT, Perplexity, Gemini, and Claude with real category queries
- Real Claude Managed Agent browsing the site and attempting transactions
- Full-site crawl (up to 100 URLs) with cross-page consistency checks
- Weekly monitoring with diff alerts
- Vertical benchmark comparison (your score vs other ecommerce / B2B SaaS / professional services sites)
Built on the methodology in skills/seo_ai_audit.md Step 7 (AI Search Visibility) and Step 12 (WebMCP / Agent Readiness), extended with metrics published by AgentChecker.ai (UK).