1) Start with the job, not the keyword

Write one statement: “A user wants to ___ so they can ___.” Then brainstorm 20+ questions from support tickets, sales calls, Reddit/communities, and SERPs. Group them by intent:
- How-to (“how to implement…”, “steps to…”)
- Comparison (“X vs Y”, “best tools for…”, “alternatives to…”)
- Cost (“pricing, cost, ROI, time required”)
- Stats/Benchmarks (“industry averages, success rates, timelines”)
This mix mirrors what AI answers need to assemble a complete response.
2) Build the intent cluster

Create a hub page and at least 5–7 spokes:
- Hub (pillar): A definitive guide with a 2–4 sentence credible answer block at the top, followed by the key sections and an FAQ.
- Spokes:
- A step-by-step how-to (with a checklist)
- X vs Y comparison (decision table + “choose if…” rules)
- “Best tools” or “alternatives” page (criteria first, then options)
- Pricing/cost/ROI explainer (with a simple calculator or formula; see the sketch after this list)
- Stats/benchmarks page (your dataset + methods)
- Troubleshooting / mistakes page (short, scannable)
Interlink them in both directions with descriptive anchors. Keep each spoke ≤3 clicks from the homepage.
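As promised above, here is a minimal sketch of the math a pricing/ROI calculator spoke might embed. Every variable name and figure is a hypothetical placeholder, not a benchmark:

```python
def roi_summary(monthly_tickets_deflected: int,
                cost_per_ticket: float,
                monthly_tool_cost: float,
                setup_cost: float) -> dict:
    """Payback-period math for a pricing/ROI explainer page.
    All inputs are hypothetical placeholders, not industry data."""
    monthly_savings = monthly_tickets_deflected * cost_per_ticket
    net_monthly = monthly_savings - monthly_tool_cost
    payback_months = setup_cost / net_monthly if net_monthly > 0 else float("inf")
    return {
        "monthly_savings": monthly_savings,
        "net_monthly_benefit": net_monthly,
        "payback_months": round(payback_months, 1),
    }

# Example: 400 deflected tickets/month at $6 each, $500/month tool, $4,000 setup
print(roi_summary(400, 6.0, 500.0, 4000.0))
# {'monthly_savings': 2400.0, 'net_monthly_benefit': 1900.0, 'payback_months': 2.1}
```

Showing the formula on the page (not just an interactive widget) gives engines a verifiable calculation they can quote.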
3) Turn pages into citation magnets

LLMs quote short, verifiable facts. Bake these into every page:
- Credible answer block (top): 2–4 sentences that directly answer the question, followed by 3–5 bullets with numbers, definitions, or thresholds.
- Ordered steps and tables: models love lists and matrices they can lift.
- Methods + sources: 1 paragraph explaining how you got your numbers and 2–4 outbound citations (standards, primary research, docs).
- Authorship + freshness: named author, “last updated” date, and a short change log.
- Schema: Article + FAQPage for most; add HowTo/Product when relevant (see the JSON-LD sketch below).
- UX details: fast load, clean headings, descriptive image alt text, no intrusive pop-ups.
Pro tip: Publish an “evidence page” inside each cluster—a compact resource such as a benchmark table or a public mini-dataset. These become recurring citations across engines.
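To make the schema bullet concrete, here is a minimal sketch of an Article + FAQPage JSON-LD block, built with Python's standard json module. The headline, author name, date, and Q&A text are placeholders, not recommendations:

```python
import json

# Minimal Article + FAQPage JSON-LD for a hub page.
# All names, dates, and Q&A text below are placeholders.
schema = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Article",
            "headline": "AI Knowledge Base Software: 2025 Buyer's Guide",
            "author": {"@type": "Person", "name": "Jane Doe"},  # named author
            "dateModified": "2025-01-15",                       # "last updated" date
        },
        {
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": "How much does an AI knowledge base cost?",
                    "acceptedAnswer": {
                        "@type": "Answer",
                        "text": "Placeholder answer: costs vary by tool and team size.",
                    },
                }
            ],
        },
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(schema, indent=2))
```

Validate the generated markup with a rich-results testing tool before shipping.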
4) A 7-day content sprint (repeat weekly)

- Day 1: Draft your hub’s credible answer block and outline.
- Day 2: Write the How-to spoke (add a checklist).
- Day 3: Ship the X vs Y comparison (decision table first, then narrative).
- Day 4: Publish the pricing/ROI explainer with one simple calculation.
- Day 5: Create the stats/benchmarks page with a tiny survey or scraped public data (document your method; see the sketch after this list).
- Day 6: Add FAQ entries across all pages (4–6 real questions).
- Day 7: Interlink hub↔spokes, add schema, and post a short summary on LinkedIn/Reddit with 1 actionable snippet.
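For Day 5, a minimal sketch of the benchmark math, assuming you have already collected responses (the figures below are invented placeholders, not real survey data):

```python
from statistics import mean, median

# Hypothetical survey responses: deflection rate (%) reported by 8 teams.
# Replace with your own collected data and document how you gathered it.
deflection_rates = [22, 35, 41, 28, 55, 30, 47, 38]

print(f"n = {len(deflection_rates)} responses")
print(f"Average deflection rate: {mean(deflection_rates):.1f}%")
print(f"Median deflection rate: {median(deflection_rates):.1f}%")
print(f"Range: {min(deflection_rates)}–{max(deflection_rates)}%")
```

Publishing the raw list alongside this summary doubles as your methods paragraph and hands engines a short, verifiable fact to lift.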
5) Example: “AI Knowledge Base Software” cluster

- Hub: “AI Knowledge Base Software: 2025 Buyer’s Guide”
- Spokes:
- How to structure an AI-ready knowledge base (steps)
- Zendesk vs Intercom vs Freshdesk: which fits when? (decision table)
- Cost to implement (time, tools, people) with a simple calculator
- Benchmarks: deflection rates and training data sizes
- Top mistakes (and quick fixes)
- Evidence page: a 100-site audit covering document types, average article length, and update frequency.
Each page opens with an answer block, ends with methods, and links back to the hub. That pattern alone raises citability.
6) Measurement: prove it works

Beyond rankings and clicks, add three GEO metrics:
- Share of Citation (SoC): the % of AI answers that mention or link to you for a given topic.
- Citation Count by Query: which questions you’re winning (and losing).
- Named-entity mentions: how often your brand is referenced even when unlinked.
You can track these weekly with the AI Search Visibility Tool and prioritize the spokes that are already getting picked up. For planning future clusters, the Generative Engine Optimisation Tool view helps reveal missing questions and sources the engines prefer.
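If you want to sanity-check SoC by hand before relying on a tool, a minimal sketch might look like this (the sampled answers and domain are invented):

```python
# Minimal Share-of-Citation (SoC) check: what % of sampled AI answers
# for a topic mention or link to your domain. Sample data is hypothetical.
answers = [
    {"query": "best ai knowledge base software", "cited_domains": ["example.com", "vendor-a.com"]},
    {"query": "ai knowledge base cost", "cited_domains": ["vendor-b.com"]},
    {"query": "zendesk vs intercom", "cited_domains": ["example.com"]},
]

YOUR_DOMAIN = "example.com"

cited = [a for a in answers if YOUR_DOMAIN in a["cited_domains"]]
soc = len(cited) / len(answers) * 100
print(f"SoC: {soc:.0f}% ({len(cited)}/{len(answers)} answers)")

# Per-query wins and losses (Citation Count by Query)
for a in answers:
    status = "WIN" if YOUR_DOMAIN in a["cited_domains"] else "MISS"
    print(f"{status}: {a['query']}")
```

The MISS rows are your backlog: each losing query maps to a spoke that needs a stronger answer block or evidence asset.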
7) Checklist (copy/paste)

- One core JTBD → 20+ questions grouped by intent
- Hub + 5–7 spokes interlinked with descriptive anchors
- Each page starts with a credible answer block (2–4 sentences + 3–5 facts)
- At least one table or checklist per page
- Methods paragraph, author, last-updated date, and mini change log
- Article + FAQ schema; add HowTo/Product where applicable
- One evidence asset per cluster (dataset, benchmark, teardown)
- Weekly review of SoC and citation count; double down on winning spokes