I've audited 40 B2B SaaS sites in the last six months. The same seven mistakes show up in 33 of them. They aren't exotic. None of them require a rebuild. The reason they persist isn't technical — it's sequencing.
Most companies discover GEO the same way: a CEO or founder types "best [our category]" into ChatGPT, doesn't see their company, panics, and asks marketing what's going on. Marketing then does the obvious thing: produces more content, adds some FAQ schema, maybe gets a Reddit mention or two. Three months later, citation rate hasn't moved. Either everyone gives up, or someone hires an agency that does the same things slightly faster. Same outcome.
The work isn't the problem. The order is. Here's the pattern.
1. Buyer-intent prompts have no coverage
The most common gap, by a wide margin. The prompts buyers actually type — "best CRM for solo founders," "HubSpot vs Pipedrive for a 10-person team," "CRM that integrates with HubSpot for solo founders" — don't map to any specific page on the company's site. There's a homepage, a pricing page, a generic "why us" page, and a blog with a hundred posts about generic SaaS topics. Nothing answers the prompt.
AI engines extract from pages that match the prompt's intent. If the prompt is "best CRM for solo founders" and you have a page titled exactly that, with a comparison table and an FAQ block, you get cited. If the closest thing on your site is a homepage that says "CRM for modern teams," you don't.
The fix is producing the missing pages. Comparison pages. "Best for [ICP]" pages. Problem-led pages. Done in volume — 4–8 a month — with FAQ schema attached. Every audit's top-20-fixes list includes 6–10 of these as the highest-impact items.
2. Existing pages are unstructured for extraction
The page exists. It ranks fine in Google. AI engines can't lift a clean answer from it because the answer is buried in paragraph 4, the buyer question isn't restated as a heading, and there's no FAQ block. Schema is missing or wrong. The page was written for human readers who scroll, not for LLMs that extract.
The fix is restructuring, not rewriting. The first 80–120 words after the H1 deliver a direct answer to the page's primary buyer question. Headings restate buyer questions. Tables replace prose lists wherever a comparison applies. FAQ blocks at the bottom carry FAQ schema. Entity context (what the product is, who it's for, what category it sits in) is mentioned in paragraph 1 and reinforced in alt text and JSON-LD.
Every audit identifies the top 20 pages by buyer intent and scores them 0–100 for extractability. The cheapest big-mover fixes are usually here.
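The scoring logic can be sketched as a simple checklist. This is a hypothetical illustration, not the actual audit rubric: the four signals and the 25-point weights are assumptions chosen to mirror the restructuring advice above.

```python
# Hypothetical sketch only -- the signals and equal weights below are
# illustrative assumptions, not the real 0-100 audit rubric.

def extractability_score(page: dict) -> int:
    """Score 0-100 from four extraction signals, 25 points each."""
    checks = [
        page.get("answer_in_first_120_words", False),   # direct answer after the H1
        page.get("headings_restate_questions", False),  # headings = buyer questions
        page.get("has_faq_schema", False),              # FAQ block with schema
        page.get("uses_tables_for_comparisons", False), # tables instead of prose lists
    ]
    return 25 * sum(checks)

page = {
    "answer_in_first_120_words": True,
    "headings_restate_questions": True,
    "has_faq_schema": False,
    "uses_tables_for_comparisons": True,
}
print(extractability_score(page))  # 75
```

A page that ranks well in Google can still score low here, which is exactly the gap this mistake describes.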
3. Schema is missing or wrong
Forty audits, six instances of clean Organization + Product + FAQ + HowTo + Review schema across the buyer-funnel pages. Six. That's 15%. The other 85% have either no schema, partial schema, or schema with errors that break extraction (wrong @type, missing required fields, conflicting markup).
Schema doesn't guarantee citations. But its absence almost guarantees no citations. AI engines preferentially extract from pages with structured Q&A and structured product data. If the schema isn't there, you're competing with companies whose schema is.
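For reference, here is what a minimal clean pairing looks like. The brand name, URL, and Q&A copy are made up for the example; the `@type` values and field names follow schema.org's published vocabulary.

```python
import json

# Minimal Organization + FAQPage JSON-LD sketch. "ExampleCRM" and the Q&A
# text are hypothetical; the structure follows schema.org conventions.

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCRM",           # hypothetical brand
    "url": "https://example.com",
}

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best CRM for solo founders?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "ExampleCRM is built for one-person teams.",
            },
        }
    ],
}

def render_jsonld(*blocks) -> str:
    """Emit each block as its own <script type="application/ld+json"> tag."""
    return "\n".join(
        '<script type="application/ld+json">\n'
        + json.dumps(block, indent=2)
        + "\n</script>"
        for block in blocks
    )

html = render_jsonld(organization, faq_page)
print(html)
```

The common breakages from the audits map directly onto this structure: a wrong `@type`, a `Question` missing its `acceptedAnswer`, or two conflicting `Organization` blocks on the same page.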
4. No third-party authority surfaces
AI engines cite sources that other sources cite. That's the recursive definition of entity authority, and it's why "just write more content" only moves citation rates so far. If your brand isn't mentioned on the surfaces the AI training set already trusts — G2, Capterra, Reddit threads, podcast appearances, comparison roundups on category trade media — content alone won't lift you past a ceiling.
Most of the audits I run find the company has 10–30 G2 reviews (decent), zero recent Reddit mentions (bad), no podcast appearances in the last 12 months (bad), and a backlink profile that's entirely SEO-optimized linkbuilding rather than topical authority (bad).
The fix is earned mention work — slow, compounding, unglamorous. We earn 2–5 placements per month per retainer client across G2/Capterra (review velocity), Reddit (founder answers in /r/SaaS and category subs), trade press (comparison roundups), and podcasts. By month 6, the entity weight has shifted measurably.
5. Prompt coverage skews to the wrong layer
Even when a company has been doing "GEO" for a while, the prompts they've targeted are usually wrong. They've optimized for "what is [category]" (low intent — buyer is researching the category, not your tool) instead of "best [category] for [ICP]" (high intent — buyer is comparing 3–5 specific tools right now).
High-intent prompts have lower volume but ten times the conversion. A citation on "best CRM for solo founders" converts to a demo at roughly 4–8% — far higher than ranking #5 on "what is a CRM" converts on Google.
The audit walks through the prompt taxonomy: category prompts (low intent), comparison prompts (high intent), problem-led prompts (medium intent), and persona prompts (high intent). The roadmap rebalances coverage toward the high-intent layer.
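The taxonomy above can be sketched as a keyword-rule classifier. The rules and default bucket here are assumptions for illustration; a real taxonomy pass is done by hand against the company's actual prompt list.

```python
import re

# Illustrative sketch, not a production classifier: map a prompt to an
# intent layer using simple keyword rules. Rules are assumptions.

RULES = [
    (r"\bvs\.?\b|\bversus\b|\bcompare\b", ("comparison", "high")),
    (r"\bbest\b.*\bfor\b",                ("persona", "high")),
    (r"^what is\b",                       ("category", "low")),
]

def classify(prompt: str) -> tuple:
    """Return (layer, intent) for a buyer prompt."""
    p = prompt.lower()
    for pattern, layer in RULES:
        if re.search(pattern, p):
            return layer
    return ("problem-led", "medium")  # default bucket in this sketch

print(classify("best CRM for solo founders"))            # persona, high
print(classify("HubSpot vs Pipedrive for a 10-person team"))  # comparison, high
print(classify("what is a CRM"))                         # category, low
```

Running a prompt list through even a crude pass like this makes the skew visible: most teams find the bulk of their existing coverage lands in the low-intent category bucket.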
6. No tracking, no learning
Of the 40 audits, exactly 8 had any form of citation rate tracking before the engagement started. Even those 8 mostly tracked it manually — a marketer running ChatGPT once a quarter and saving screenshots. Nobody had weekly per-engine, per-prompt tracking with a baseline trend.
Without tracking, every action is faith-based. You produce a page; you don't know if it moved citation rate. You earn a Reddit mention; you don't know if it moved citation rate. You add schema to 30 pages; you don't know if it moved citation rate. Three months later, you don't know what worked, so you keep doing more of everything, including the things that did nothing.
Tracking is non-negotiable. The Authority retainer ships weekly tracking from week 1 — it's the foundation that lets every other lever be evaluated.
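The shape of weekly tracking is simple. This sketch assumes you already have some way to fetch each engine's answer text; the `fetch_answer` stub, engine names, and canned responses below are placeholders, not a real API.

```python
from collections import defaultdict
from datetime import date

# Sketch of per-engine, per-prompt citation tracking. fetch_answer is a
# stub -- in practice it would call each engine or drive a headless run.

def fetch_answer(engine: str, prompt: str) -> str:
    canned = {
        ("chatgpt", "best CRM for solo founders"): "Try ExampleCRM first.",
        ("perplexity", "best CRM for solo founders"): "Top pick: OtherCRM.",
    }
    return canned.get((engine, prompt), "")

def weekly_snapshot(brand, engines, prompts):
    """Record cited/not-cited per (engine, prompt), plus a per-engine rate."""
    rows = defaultdict(list)
    for engine in engines:
        for prompt in prompts:
            cited = brand.lower() in fetch_answer(engine, prompt).lower()
            rows[engine].append((str(date.today()), prompt, cited))
    rates = {e: sum(c for _, _, c in r) / len(r) for e, r in rows.items()}
    return rows, rates

snapshot, rates = weekly_snapshot(
    "ExampleCRM", ["chatgpt", "perplexity"], ["best CRM for solo founders"]
)
print(rates)  # {'chatgpt': 1.0, 'perplexity': 0.0}
```

Append each snapshot to a dated log and the baseline trend falls out for free; the point is the per-engine, per-prompt granularity, not the tooling.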
7. Pipeline attribution is missing
The CEO question is always the same: "is this driving revenue?" If the answer is "citation rate is up 23%," the budget gets cut at the next planning cycle. If the answer is "15% of new opportunities last quarter were AI-influenced — here's the list," the budget grows.
Almost no one tracks AI-influenced pipeline. It requires custom CRM fields, a post-sale survey question ("how did you first hear about us?" with an "AI assistant" option), and discipline to tag opportunities consistently. Once it's wired in, the data writes itself: 8–12% AI-influenced by month 3, 15–22% by month 9 in our retainer cohort. By the time the board deck includes that number, the GEO budget question is over.
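Once the CRM field exists, the rollup itself is trivial. The field name and tag value below are assumptions for the example; use whatever your CRM export actually calls them.

```python
# Sketch of the attribution rollup, assuming opportunities are exported
# with a custom "first_heard" field. Field name and values are assumptions.

opportunities = [
    {"id": 1, "first_heard": "ai_assistant"},
    {"id": 2, "first_heard": "google"},
    {"id": 3, "first_heard": "ai_assistant"},
    {"id": 4, "first_heard": "referral"},
]

def ai_influenced_share(opps, field="first_heard", tag="ai_assistant"):
    """Fraction of opportunities first heard via an AI assistant."""
    if not opps:
        return 0.0
    return sum(o.get(field) == tag for o in opps) / len(opps)

print(f"{ai_influenced_share(opportunities):.0%}")  # 50%
```

The hard part isn't the arithmetic; it's the tagging discipline that makes the input data trustworthy enough to put in a board deck.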
The order matters
These seven aren't a checklist. They're a sequence.
- Tracking first (otherwise you can't evaluate anything else).
- Then the audit (otherwise you don't know which of the other six are biggest for you specifically).
- Then content + schema in parallel (the production layer).
- Then earned mentions, ongoing (the authority layer that compounds).
- Then attribution (the layer that protects the budget).
Most teams do these in the wrong order — content first, tracking last. By the time they instrument tracking, six months have passed and they can't reconstruct what worked.
If you're thinking about GEO for your SaaS and don't want to stumble through the same sequence: run a free Shortlist Score to see your starting point, or book the audit to get the full prioritized list. Either way, sequence the work. That's the most underrated lesson from 40 audits.