- Independent GEO research has converged on three pillars: Retrievability, Citability, and Recognizability.
- They mirror how AI engines build a response: retrieve candidates, ground claims to sources, surface a recommendation.
- BeCited's five scoring dimensions decompose the pillars: 55 points to Retrievability, 20–25 to Citability, 20–25 to Recognizability.
- A brand can fail any one pillar and remain absent from the answer. Most fail Citability first.
"What's wrong with my AI visibility?" is rarely a single problem. Across hundreds of brand audits, the answer always decomposes into three load-bearing axes — the same three the broader GEO research community keeps converging on, regardless of methodology.
BeCited's audit framework is built around these three pillars: Retrievability, Citability, and Recognizability. They're how AI answer engines actually build a response, in order. They're also how to diagnose what's broken.
Why three pillars (and not five, or seven)
This framing is consistent across the major independent GEO studies: the Ahrefs 75,000-brand correlation analysis, the Princeton GEO paper, Muck Rack and AirOps citation studies, and Digital Bloom's answer-block research. Each study uses different metrics. They all converge on retrieval, citation grounding, and recognition as the load-bearing dimensions.
The reason is structural. Generative AI engines build an answer in three sequential steps:
- Retrieve candidate content from the open web and indexed sources.
- Ground claims to a small set of those sources, citing them.
- Recognize and surface a recommended brand, or a short set of brands, from among the candidates.
A brand that fails at any one step is absent from the answer. You can have flawless brand recognition and lose because retrieval bots can't reach your site. You can have perfect technical SEO and lose because no third-party source mentions you. The three pillars are jointly necessary.
Pillar 1: Retrievability
When someone asks AI about your category, does your brand show up at all?
Retrievability measures whether AI engines can find your brand in their candidate pool. It depends on crawler access (robots.txt, sitemap), index coverage, and the fundamental presence of your brand on web content the engine can fetch. Without retrievability, nothing downstream matters.
BeCited decomposes Retrievability into three measurable dimensions: visibility on engine-class platforms (ChatGPT, Perplexity, Claude, Gemini), visibility on browser-class platforms (Bing, Google AI Overviews), and consistency across engines. A brand that shows up 100% of the time on ChatGPT but 0% on Perplexity scores poorly on consistency: to every user of the engines where it fails to appear, that brand is functionally invisible.
Combined, Retrievability dimensions carry 55 of 100 GEO Score points across both SaaS and Local Service profiles. It's the largest single bucket because it's the prerequisite for everything else.
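As a sketch of how those dimensions could be computed from raw capture data — the capture format, the platform names, and the spread-based consistency metric here are illustrative assumptions, not BeCited's internal pipeline:

```python
from statistics import pstdev

# Hypothetical audit captures: (platform, brand_mentioned) pairs.
captures = [
    ("chatgpt", True), ("chatgpt", True), ("chatgpt", False),
    ("perplexity", False), ("perplexity", True),
    ("gemini", True), ("gemini", False),
]

def visibility_by_platform(captures):
    """Fraction of captures per platform where the brand appeared."""
    totals, hits = {}, {}
    for platform, mentioned in captures:
        totals[platform] = totals.get(platform, 0) + 1
        hits[platform] = hits.get(platform, 0) + int(mentioned)
    return {p: hits[p] / totals[p] for p in totals}

rates = visibility_by_platform(captures)
# One simple consistency proxy: 1 minus the spread of per-engine rates,
# so uniform visibility scores near 1 and lopsided visibility scores lower.
consistency = 1 - pstdev(list(rates.values()))
```

A brand at 100% on one engine and 0% elsewhere would have a large spread and a low consistency score, matching the failure mode described above.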
Pillar 2: Citability
Are you on the third-party sources AI engines pull from?
Citability measures presence on the domains AI engines actually cite when forming answers. The Muck Rack and AirOps studies put this at 82–91% of citations — engines lean almost entirely on third-party sources, not the brand's own website. If your category's top-cited domains don't list you, the engine has nothing to cite.
Citability is decomposed into a single dimension: Source Presence, the percentage of top-cited third-party sources where your brand is listed. BeCited builds this list empirically from your audit's own captures — every URL the AI engines cited, ranked by citation frequency. Then we check each one for whether your brand appears.
Citability carries 20 points for SaaS and 25 points for Local Service. Local services get the higher weight because directory and review platforms (Yelp, Google Business, Angi, TripAdvisor) dominate citations in vertical queries.
Most brands fail Citability first. A brand can have an A on site readiness and still be invisible if it's not on the listicles, review aggregators, and forums the engines cite. The fix usually isn't more content on your site; it's earned mentions on other sites.
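The Source Presence check described above — rank every cited domain by frequency, then test each for the brand — can be sketched as follows. The cited-domain log, the brand-listing set, and the top-N cutoff are all hypothetical:

```python
from collections import Counter

# Hypothetical log of domains the engines cited across an audit's captures.
cited_urls = ["g2.com"] * 3 + ["reddit.com"] * 2 + ["capterra.com"] * 2 + ["techradar.com"]
brand_listed_on = {"g2.com", "capterra.com"}  # sources that mention the brand

def source_presence(cited_urls, brand_listed_on, top_n=3):
    """Share of the top-N most-cited sources that list the brand."""
    top = [domain for domain, _ in Counter(cited_urls).most_common(top_n)]
    return sum(d in brand_listed_on for d in top) / len(top)

print(source_presence(cited_urls, brand_listed_on))  # 2 of the top 3 list the brand
```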
Pillar 3: Recognizability
When you do show up, are you recommended — or just one option among many?
Recognizability is the difference between being named and being chosen. AI engines often list five or more brands per answer. Being one of five with a passing reference is not the same as being the lead recommendation. Position matters: a first-listed recommendation captures dramatically more click-through than a fifth-listed one.
BeCited measures Recognizability with a position-weighted Recommendation Strength score. The math:
- For each capture where your brand is the primary recommendation, your share is 1 / total primary recommendations in that response.
- If you're the sole recommendation, your share is 1.0. One of two: 0.5. One of three: 0.33.
- Position weighting: position 1 gets a 1.25× multiplier; position 3 or later gets 0.85×.
This formula prevents "everyone wins" responses (where the AI recommends 5+ brands) from inflating your score the same way an exclusive recommendation would. It also reflects empirical reality: being first listed captures meaningfully more action than being fifth in a list of ten.
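The scoring rules above translate directly into a small function. Two details are assumptions on my part: position 2 is treated as unweighted at 1.0× (the article specifies only positions 1 and 3+), and per-capture scores are averaged into the final number (the aggregation step is not spelled out):

```python
def position_multiplier(position):
    """Multipliers from the article; 1.0x for position 2 is an assumption."""
    if position == 1:
        return 1.25
    if position >= 3:
        return 0.85
    return 1.0

def recommendation_strength(captures):
    """Average position-weighted share across captures where the brand is
    a primary recommendation. Each capture is (num_primary_recs, position)."""
    scores = [
        (1 / num_primary) * position_multiplier(position)
        for num_primary, position in captures
    ]
    return sum(scores) / len(scores)

# Sole recommendation at position 1: 1.0  * 1.25 = 1.25
# One of two at position 2:          0.5  * 1.0  = 0.5
# One of three at position 3:        0.33 * 0.85 ~ 0.28
print(round(recommendation_strength([(1, 1), (2, 2), (3, 3)]), 3))
```

The dilution is visible in the numbers: the exclusive first-position recommendation is worth more than four times the third-of-three case.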
Recognizability carries 25 points for SaaS and 20 for Local Service. SaaS weights it higher because B2B buyers compare alternatives more methodically; the position of the lead recommendation drives more of the decision.
How the pillars map to scoring
| Pillar | Dimension | SaaS | Local |
|---|---|---|---|
| Retrievability | Engine Visibility | 20 | 15 |
| Retrievability | Browser Visibility | 20 | 30 |
| Retrievability | Engine Consistency | 15 | 10 |
| Citability | Source Presence | 20 | 25 |
| Recognizability | Recommendation Strength | 25 | 20 |
Weights always sum to 100. Local services weight Browser Visibility (Google AI Overviews) and Source Presence (directories) higher because that's where their buyers find them. SaaS weights Recommendation Strength and Engine Consistency higher because being the recommended choice across diverse engines is what closes B2B deals.
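The table translates into a simple weighted sum. The point weights below are copied straight from the table; the 0-to-1 dimension scores are hypothetical audit results, not real data:

```python
WEIGHTS = {
    "saas":  {"engine_visibility": 20, "browser_visibility": 20,
              "engine_consistency": 15, "source_presence": 20,
              "recommendation_strength": 25},
    "local": {"engine_visibility": 15, "browser_visibility": 30,
              "engine_consistency": 10, "source_presence": 25,
              "recommendation_strength": 20},
}

def geo_score(dimension_scores, profile):
    """Weighted sum: each 0-1 dimension score times its point weight."""
    return sum(dimension_scores[d] * w for d, w in WEIGHTS[profile].items())

# Hypothetical brand: strong retrieval, weak source presence.
scores = {"engine_visibility": 0.8, "browser_visibility": 0.6,
          "engine_consistency": 0.9, "source_presence": 0.4,
          "recommendation_strength": 0.5}
print(round(geo_score(scores, "saas"), 1))  # 16 + 12 + 13.5 + 8 + 12.5 = 62.0
```

The same dimension scores produce a different total under the Local profile, which is the point of per-vertical weighting.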
How to use this
The three-pillar framing turns a generic "improve our AI visibility" goal into specific work:
- Score weak on Retrievability? Start with technical: robots.txt, sitemap, schema, page accessibility for retrieval crawlers. See article two.
- Score weak on Citability? Map the top-cited domains for your category and earn placements. Editorial listicles, review profiles, community presence.
- Score weak on Recognizability? The brand exists in the answer pool but isn't winning. Differentiation, positioning, and quotable claims become the lever.
Most brands fail one pillar disproportionately. The audit's job is to tell you which one. The remediation strategy follows.
We measure all three pillars
Every BeCited audit scores Retrievability, Citability, and Recognizability across ChatGPT, Gemini, Perplexity, and Claude using 100–300 buying-intent prompts and 15 site-readiness checks. Results come with 95% confidence intervals and a calibrated mention rubric (Cohen's κ = 0.722) so the score can withstand scrutiny.
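The article does not say which method backs the 95% confidence intervals. For a binomial proportion like a mention rate, the Wilson score interval is one standard choice, so the sketch below is an illustration rather than BeCited's actual computation:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion,
    e.g. the share of prompts where a brand was mentioned."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical audit: brand mentioned in 42 of 150 buying-intent prompts.
lo, hi = wilson_interval(42, 150)
print(f"mention rate 28%, 95% CI {lo:.1%}-{hi:.1%}")
```

With 100-300 prompts per audit, the interval width makes clear how much of a score swing is signal versus sampling noise.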