- Independent GEO research has converged on three pillars: Retrievability, Citability, and Recognizability.
- They mirror how AI engines build a response: retrieve candidates, ground claims to sources, surface a recommendation.
- BeCited's five scoring dimensions decompose the pillars: 55 points to Retrievability, 20–25 to Citability, 20–25 to Recognizability.
- A brand can fail any one pillar and remain absent from the answer. Most fail Citability first.
- Position-weighted Recommendation Strength matters: position 1 gets 1.25x; position 3 or later gets 0.85x.
"What is wrong with my AI visibility?" is rarely a single problem. Across hundreds of brand audits, the answer always decomposes into three load-bearing axes — the same three the broader GEO research community keeps converging on, regardless of methodology.
BeCited's audit framework is built around these three pillars: Retrievability, Citability, and Recognizability. They are how AI answer engines actually build a response, in order. They are also how to diagnose what is broken.
Why three pillars (and not five, or seven)
This framing is consistent across the major independent GEO studies: the Ahrefs 75,000-brand correlation analysis, the Princeton GEO paper, Muck Rack and AirOps citation studies, and Digital Bloom's answer-block research. Each study uses different metrics. They all converge on retrieval, citation grounding, and recognition as the load-bearing dimensions.
The reason is structural. Generative AI engines build an answer in three sequential steps:
- Retrieve candidate content from the open web and indexed sources.
- Ground claims to a small set of those sources, citing them.
- Recognize a recommended brand or set of brands from among the candidates.
A brand that fails at any one step is absent from the answer. You can have flawless brand recognition and lose because retrieval bots cannot reach your site. You can have perfect technical SEO and lose because no third-party source mentions you. The three pillars are jointly necessary.
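To make the short-circuit concrete, here is a minimal TypeScript sketch of that three-stage shape. The types and function names are illustrative, not any engine's real API; the point is only that an empty result at any stage leaves the brand out of the final answer.

```typescript
// Illustrative pipeline only: these types and functions are not a real
// engine's API. An empty result at any stage leaves the brand out.
type Candidate = { url: string; content: string };
type GroundedSource = { url: string; citedClaims: string[] };

function buildAnswer(
  query: string,
  retrieve: (q: string) => Candidate[],              // Pillar 1: Retrievability
  ground: (cands: Candidate[]) => GroundedSource[],  // Pillar 2: Citability
  recognize: (srcs: GroundedSource[]) => string[],   // Pillar 3: Recognizability
): string[] {
  const candidates = retrieve(query);
  if (candidates.length === 0) return [];  // never retrieved: absent
  const sources = ground(candidates);
  if (sources.length === 0) return [];     // never cited: absent
  return recognize(sources);               // recommended brands, in order
}
```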
Pillar 1: Retrievability
When someone asks AI about your category, does your brand show up at all?
Retrievability measures whether AI engines can find your brand in their candidate pool. It depends on crawler access (robots.txt, sitemap), index coverage, and whether your brand appears at all on web content the engine can fetch. Without retrievability, nothing downstream matters.
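One check you can run yourself today: does your robots.txt block the engines' published crawlers? The user-agent tokens below (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) are the crawlers' real names; the blanket-disallow matching is a deliberate simplification, not a full robots.txt parser.

```typescript
// Retrievability smoke test: fetch robots.txt and flag AI crawlers hit by
// a blanket "Disallow: /". Real parsing has more rules (wildcards, Allow
// overrides, group merging); this sketch deliberately ignores them.
const AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"];

async function blockedAiCrawlers(origin: string): Promise<string[]> {
  const res = await fetch(new URL("/robots.txt", origin));
  if (!res.ok) return []; // no robots.txt: crawlers are allowed by default
  const body = (await res.text()).toLowerCase();
  const groups = body.split(/\n(?=user-agent:)/); // one group per user-agent
  return AI_CRAWLERS.filter((bot) =>
    groups.some(
      (g) =>
        g.includes(`user-agent: ${bot.toLowerCase()}`) &&
        /^disallow:\s*\/\s*$/m.test(g),
    ),
  );
}

// blockedAiCrawlers("https://example.com").then(console.log);
```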
BeCited decomposes Retrievability into three measurable dimensions: visibility on engine-class platforms (ChatGPT, Perplexity, Claude, Gemini), visibility on browser-class platforms (Bing, Google AI Overviews), and consistency across engines. A brand that shows up 100% on ChatGPT but 0% on Perplexity scores poorly on consistency — you are functionally invisible to 75% of AI users.
Combined, the Retrievability dimensions carry 55 of the 100 GEO Score points in both the SaaS and Local Service profiles. It is the largest single bucket because it is the prerequisite for everything else.
Pillar 2: Citability
Are you on the third-party sources AI engines pull from?
Citability measures presence on the domains AI engines actually cite when forming answers. The Muck Rack and AirOps studies put this at 82–91% of citations — engines lean almost entirely on third-party sources, not the brand's own website. If your category's top-cited domains do not list you, the engine has nothing to cite.
Citability maps to a single dimension: Source Presence, the percentage of top-cited third-party sources where your brand is listed. BeCited builds this list empirically from your audit's own captures: every URL the AI engines cited, ranked by citation frequency. Then it checks each one for whether your brand appears.
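A sketch of that computation (the Capture shape, the topN cutoff, and the injected listsBrand checker are assumptions for illustration; BeCited's actual implementation is documented in site/js/scoring.js):

```typescript
// Rank every cited domain by citation frequency, then measure the share of
// the top domains that list the brand. Shapes here are illustrative only.
type Capture = { citedUrls: string[] };

function sourcePresence(
  captures: Capture[],
  listsBrand: (domain: string) => boolean, // caller fetches and scans each domain
  topN = 20,
): number {
  const freq = new Map<string, number>();
  for (const capture of captures)
    for (const url of capture.citedUrls) {
      const domain = new URL(url).hostname;
      freq.set(domain, (freq.get(domain) ?? 0) + 1);
    }
  const top = [...freq.entries()]
    .sort((a, b) => b[1] - a[1]) // most-cited first
    .slice(0, topN)
    .map(([domain]) => domain);
  if (top.length === 0) return 0;
  return top.filter(listsBrand).length / top.length; // share of top sources listing you
}
```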
Citability carries 20 points for SaaS and 25 points for Local Service. Local services get the higher weight because directory and review platforms (Yelp, Google Business, Angi, TripAdvisor) dominate citations in vertical queries.
Most brands fail Citability first. A brand can have an A on site readiness and still be invisible if it is not on the listicles, review aggregators, and forums the engines cite. The fix is usually not more content on your site; it is earned mentions on other sites.
Pillar 3: Recognizability
When you do show up, are you recommended — or just one option among many?
Recognizability is the difference between being named and being chosen. AI engines often list five or more brands per answer. Being one of five with a passing reference is not the same as being the lead recommendation. Position matters: a first-listed recommendation captures dramatically more click-through than a fifth-listed one.
BeCited measures Recognizability with a position-weighted Recommendation Strength score. The math:
- For each capture where your brand is the primary recommendation, your share is 1 / total primary recommendations in that response.
- If you are the sole recommendation, your share is 1.0. One of two: 0.5. One of three: 0.33.
- Position weighting: position 1 gets a 1.25× multiplier; position 3 or later gets 0.85×.
This formula prevents "everyone wins" responses (where the AI recommends 5+ brands) from inflating your score the same way an exclusive recommendation would. It also reflects empirical reality: being first listed captures meaningfully more action than being fifth in a list of ten.
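The per-capture rules translate directly to code. This sketch applies the shares and multipliers exactly as stated above; the neutral 1.0x for position 2 and the mean over all captures are our assumptions about the unstated parts, not documented math.

```typescript
// Share = 1 / primary recommendations in the response; 1.25x for position 1,
// 0.85x for position 3+. Position 2 at 1.0x and mean aggregation are assumed.
type PrimaryRec = { totalPrimaries: number; position: number };

function recommendationStrength(
  brandRecs: PrimaryRec[], // captures where the brand was a primary recommendation
  totalCaptures: number,   // captures where it never was contribute 0
): number {
  const total = brandRecs.reduce((sum, { totalPrimaries, position }) => {
    const share = 1 / totalPrimaries;
    const weight = position === 1 ? 1.25 : position >= 3 ? 0.85 : 1.0;
    return sum + share * weight;
  }, 0);
  return total / Math.max(totalCaptures, 1);
}

// Sole recommendation, listed first: 1.0 * 1.25 = 1.25 for that capture.
// One of three, listed third:        (1/3) * 0.85 ≈ 0.28.
```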
Recognizability carries 25 points for SaaS and 20 for Local Service. SaaS weights it higher because B2B buyers compare alternatives more methodically; the position of the lead recommendation drives more of the decision.
How the pillars map to scoring
| Pillar | Dimension | SaaS (pts) | Local (pts) |
|---|---|---|---|
| Retrievability | Engine Visibility | 20 | 15 |
| Retrievability | Browser Visibility | 20 | 30 |
| Retrievability | Engine Consistency | 15 | 10 |
| Citability | Source Presence | 20 | 25 |
| Recognizability | Recommendation Strength | 25 | 20 |
Weights always sum to 100. Local services weight Browser Visibility (Google AI Overviews) and Source Presence (directories) higher because that is where their buyers find them. SaaS weights Recommendation Strength and Engine Consistency higher because being the recommended choice across diverse engines is what closes B2B deals.
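Expressed as data, the table is just two weight profiles plus an invariant. The field names below are illustrative (the canonical weights live in site/js/scoring.js, per the sources note at the end), but the sum-to-100 check mirrors the guarantee stated above.

```typescript
// The dimension weights from the table, with a load-time sanity check that
// each profile sums to 100. Field names are illustrative, not canonical.
const WEIGHTS = {
  saas:  { engineVisibility: 20, browserVisibility: 20, engineConsistency: 15,
           sourcePresence: 20, recommendationStrength: 25 },
  local: { engineVisibility: 15, browserVisibility: 30, engineConsistency: 10,
           sourcePresence: 25, recommendationStrength: 20 },
};

for (const [profile, dims] of Object.entries(WEIGHTS)) {
  const sum = Object.values(dims).reduce((a, b) => a + b, 0);
  if (sum !== 100) throw new Error(`${profile} weights sum to ${sum}, not 100`);
}
```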
How to use this
The three-pillar framing turns a generic "improve our AI visibility" goal into specific work:
- Score weak on Retrievability? Start with the technical layer: robots.txt, sitemap, schema, and page accessibility for retrieval crawlers. See article two.
- Score weak on Citability? Map the top-cited domains for your category and earn placements. Editorial listicles, review profiles, community presence.
- Score weak on Recognizability? The brand exists in the answer pool but is not winning. Differentiation, positioning, and quotable claims become the levers.
Most brands fail one pillar disproportionately. The audit's job is to tell you which one. The remediation strategy follows.
Frequently asked questions
What are the three pillars of Generative Engine Optimization?
Retrievability, Citability, and Recognizability. Retrievability is whether AI engines can find your brand in their candidate pool. Citability is whether you appear on the third-party domains those engines actually cite. Recognizability is whether you are the recommended brand or just one option among many. A brand can fail any one pillar and remain absent from the answer.
Why three pillars and not five or seven?
The framing is structural, not arbitrary. Generative AI engines build an answer in three sequential steps: retrieve candidate content, ground claims to a small set of those sources, and recognize a recommended brand out of the candidates. Independent GEO research from Ahrefs, Princeton, Muck Rack, AirOps, and Digital Bloom converges on the same three load-bearing dimensions.
Which pillar do most brands fail first?
Citability. A brand can have an A on site readiness and still be invisible if it is not on the listicles, review aggregators, and forums the engines cite. Roughly 82–91% of AI citations come from third-party sources, not the brand's own website. The fix is usually not more content on your site; it is earned mentions on the sources that engines pull from.
How do the pillars map to BeCited's scoring?
Five scoring dimensions decompose the pillars. Retrievability covers Engine Visibility, Browser Visibility, and Engine Consistency (55 of the 100 points in both the SaaS and Local Service profiles). Citability covers Source Presence (20 points for SaaS, 25 for Local). Recognizability covers Recommendation Strength (25 for SaaS, 20 for Local). Profile-specific weights reflect where buyers actually find each business type.
What is position-weighted Recommendation Strength?
Being the first-listed recommendation matters more than being fifth in a list of ten. BeCited weights position 1 with a 1.25x multiplier and position 3 or later with 0.85x. Within each capture where the brand is a primary recommendation, the share is 1 divided by the total primary recommendations in that response: a sole recommendation scores 1.0; one of two scores 0.5; one of three scores 0.33. The position weighting is then applied on top.
How do you fix a weak Citability score?
Map the top-cited domains for your category from your audit's source analysis, then earn placements on the highest-impact ones. Common targets include category listicles (G2, Capterra, Software Advice for SaaS; Yelp, Angi, BBB for local), editorial coverage on industry publications, review aggregator profiles, and active community presence on Reddit or the niche forums that AI engines pull answers from.
The framework names the three pillars. The audit tells you which one is broken.
Every BeCited audit scores Retrievability, Citability, and Recognizability across ChatGPT, Gemini, Perplexity, and Claude. Results come with 95% confidence intervals and a calibrated mention rubric (Cohen's κ = 0.722) so the score can withstand scrutiny.
Sources cited. The three-pillar framing is BeCited's synthesis of independent GEO research, including the Ahrefs 75,000-brand correlation analysis, the Princeton GEO paper, Muck Rack and AirOps citation studies, and Digital Bloom's answer-block research. The 82–91% third-party citation rate is the consolidated range across Muck Rack and AirOps. Position-weighting multipliers (1.25x first / 0.85x third+) and dimension weights (Retrievability 55 / Citability 20–25 / Recognizability 20–25) reflect BeCited's audit methodology and are documented in CLAUDE.md and site/js/scoring.js.