Someone types into ChatGPT: "What's the best window replacement company in Bellingham?"

Thirty seconds later, three businesses are named. One gets a glowing recommendation. One is mentioned as an alternative. One is conspicuously absent — despite having 400 five-star reviews on Google.

This isn't random. AI engines follow a systematic process to decide which businesses get recommended, and it's fundamentally different from how traditional search engines rank websites. There is no page-one equivalent. There's no algorithm you can game with backlinks. And your Google ranking has surprisingly little to do with it.

We analyzed over 400 AI-generated responses across four major engines — ChatGPT, Google Gemini, Perplexity, and Claude — to reverse-engineer how each one selects and ranks business recommendations. What we found is a five-stage signal chain, a small set of "gatekeeper" sources that control visibility, and significant disagreement between engines about who the best businesses actually are.

The Five-Stage Signal Chain

Every AI recommendation passes through five stages before it reaches the person who asked the question. A breakdown at any single stage means your business doesn't get mentioned — regardless of how strong you are at the other four.

Stage 1: Query Intent Classification

Before the engine does anything else, it decides how to answer the question. This is the most underappreciated stage because it's invisible to the end user, and it determines whether your web presence matters at all.

The split varies dramatically by engine. Claude searches the web for only roughly 60% of queries; the other 40% are answered entirely from training data. Google Gemini uses "Search Grounding," which queries Google Search for nearly every factual question. This means the same business can be visible on one engine and completely invisible on another, not because of content quality, but because one engine searched the web and the other didn't bother.

Stage 2: Source Selection

When the engine does search the web, it retrieves between 5 and 10 source articles to inform its response. These sources become the engine's window into your industry — and this window is far narrower than most people realize.

This is where "gatekeeper" sources become critical. If the two or three editorial sites that an engine trusts for your industry don't mention your business, you likely won't appear in the answer — no matter how good your own website is.

Stage 3: Brand Extraction

Once the engine has its source articles, it needs to extract specific business names to include in the answer. This is where many businesses silently fail. Not every brand mentioned in a source article survives into the AI's response.

The extraction filter rewards specificity. Generic descriptions of what your business does are functionally invisible to this stage of the pipeline.

Stage 4: Ranking and Framing

Surviving extraction isn't enough. The engine now assigns each brand a position in the response and frames it with language that shapes perception.

Stage 5: User Response

The final answer is delivered to the user. And here's the critical shift from traditional search: there is no click-through to optimize for.
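The five stages above behave like a filter chain: a brand must survive every stage to reach the user. Here is a toy sketch of that chain. Everything in it is illustrative, an assumption made for demonstration: the function names, the keyword-based intent check, the trust scores, and the framing labels are invented for this example, not any engine's actual implementation.

```python
# Toy sketch of the five-stage signal chain. All names, thresholds, and
# heuristics are hypothetical illustrations, not a real engine's pipeline.

def classify_intent(query: str) -> str:
    """Stage 1: answer from training data, or search the web?"""
    needs_fresh_info = any(w in query.lower() for w in ("best", "near", "price"))
    return "web_search" if needs_fresh_info else "training_data"

def select_sources(query: str, index: list[dict]) -> list[dict]:
    """Stage 2: retrieve a handful (5-10) of trusted articles."""
    ranked = sorted(index, key=lambda article: article["trust"], reverse=True)
    return ranked[:5]

def extract_brands(sources: list[dict]) -> list[str]:
    """Stage 3: only brands with specific, concrete mentions survive."""
    brands = []
    for article in sources:
        for brand, mention in article["mentions"].items():
            if mention["specific"]:  # generic blurbs are filtered out
                brands.append(brand)
    return brands

def rank_and_frame(brands: list[str]) -> list[tuple[str, str]]:
    """Stage 4: assign each surviving brand a position and framing."""
    return [(b, "top pick" if i == 0 else "alternative")
            for i, b in enumerate(brands)]

def answer(query: str, index: list[dict]) -> list[tuple[str, str]]:
    """Stage 5: the user sees only what survived every prior stage."""
    if classify_intent(query) != "web_search":
        return []  # invisible unless present in training data
    return rank_and_frame(extract_brands(select_sources(query, index)))

# Two hypothetical source articles for a window-replacement query:
articles = [
    {"trust": 0.9, "mentions": {
        "Acme Windows": {"specific": True},   # "serving Bellingham since 1987"
        "GenericCo": {"specific": False},     # "we provide quality service"
    }},
    {"trust": 0.7, "mentions": {"BrightPane": {"specific": True}}},
]

recommendations = answer("best window replacement company in Bellingham", articles)
# → [('Acme Windows', 'top pick'), ('BrightPane', 'alternative')]
```

Note that GenericCo appears in a retrieved source but never reaches the answer: it fails at extraction, which is exactly the silent failure mode described in Stage 3.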

The Gatekeeper Effect

The most surprising finding in our research is how few sources control the entire AI recommendation pipeline for any given industry. We call this the gatekeeper effect: a small number of editorial websites act as intermediaries between your business and AI engines, and if you're not featured on those sites, you are functionally invisible.

In the project management software category, the numbers are stark. Zapier was cited in 14 out of 30 Perplexity answers about PM tools, making it the source behind nearly half of all recommendations in that space. The top five cited sources accounted for roughly 40% of all citations.

The Zapier Paradox

Here's the twist: Zapier is the number one cited source in PM software, but it is never recommended as a product. Zapier is a citation powerhouse but a product ghost. This proves something important: being a cited source and being a recommended product are entirely separate signals in the AI pipeline.

What happens is this: when Perplexity or ChatGPT retrieves Zapier's "Best Project Management Tools" listicle, the brands featured favorably in that article inherit Zapier's trust. The AI doesn't recommend Zapier — it recommends the products Zapier says are good.

Key finding: Your goal isn't to become a source that AI engines cite. It's to be featured favorably in the sources that AI engines already trust for your category.

Content Types That Drive AI Citations

Not all content formats carry equal weight in the AI pipeline. Across our dataset, the breakdown was clear:

Citation Source Breakdown

  Content Type                  Share of Citations
  "Best X" listicles            ~74%
  Head-to-head comparisons      ~15%
  Review aggregator pages       ~5%
  Vendor comparison pages       ~4%
  Forums (Reddit, Quora)        ~2%

Nearly three-quarters of all AI citations trace back to listicle-style content. If someone publishes a "Best Window Companies in Seattle" article and it ranks well enough for AI engines to retrieve it, the businesses listed in that article are the ones that get recommended. The businesses not listed don't exist in that engine's world.

Engine-by-Engine Behavioral Profiles

One of the most persistent misconceptions about AI search is that it's a single channel. It's not. Each engine has a distinct personality — different source preferences, different levels of opinionatedness, different stability characteristics. Optimizing for one does not automatically help you on another.

Google Gemini (AI Overviews)

Brands per response: 2–4
Opinionatedness: Low
Recommendation stability: ~80%
Source preference: Google Search index

Perplexity

Brands per response: 3–5
Opinionatedness: Very high
Recommendation stability: ~70%
Source preference: Third-party editorial

ChatGPT

Brands per response: 5–8
Opinionatedness: Moderate
Recommendation stability: ~90%
Source preference: Mixed editorial + Reddit

Claude

Brands per response: 5–7
Opinionatedness: Moderate
Recommendation stability: ~95%
Web search rate: ~60% of queries

The bottom line: These four engines are four separate markets with different source preferences, different stability profiles, and only ~11% source overlap. A strategy that works for Gemini may be invisible to Perplexity.
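An overlap figure like ~11% is a set-similarity measure over the sources each engine cites. As a rough illustration of the arithmetic, here is that computation as a Jaccard index over two hypothetical cited-source sets; the domains below are invented for the example, not our measured data:

```python
# Illustrative only: hypothetical cited-source sets for two engines,
# showing how a ~11% overlap figure can be computed as a Jaccard index.

def jaccard(a: set[str], b: set[str]) -> float:
    """Share of sources cited by either engine that both engines cite."""
    return len(a & b) / len(a | b)

gemini_sources = {"forbes.com", "pcmag.com", "techradar.com",
                  "zapier.com", "g2.com"}
perplexity_sources = {"zapier.com", "reddit.com", "cnet.com",
                      "theverge.com", "wired.com"}

overlap = jaccard(gemini_sources, perplexity_sources)
# 1 shared source out of 9 distinct → ≈ 0.11
```

A low Jaccard index here means that ranking well in one engine's source pool buys you almost nothing in the other's, which is why each engine has to be audited separately.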

The Review Volume Myth

One of the most common assumptions businesses make is that review volume drives AI recommendations. More reviews, more visibility. It's intuitive — and it's wrong.

We tracked review counts on G2 (one of the major business software review platforms) against AI recommendation rates across all four engines for project management tools. The results break the assumption cleanly:

G2 Reviews vs. AI Visibility

  Brand        G2 Reviews   AI Visibility
  Smartsheet   21,442       33%
  Monday.com   15,073       50%
  Asana        13,321       77%
  Notion       10,844       20%

Smartsheet has more G2 reviews than any other PM tool, yet only a 33% AI visibility rate. Notion, despite nearly 11,000 reviews, is recommended in just 20% of AI responses, the lowest in this cohort. Meanwhile, Asana achieves 77% visibility with fewer reviews than either of them.

The correlation between review volume and AI recommendation rate isn't just weak; in this dataset, it's slightly negative. More reviews do not mean more AI visibility.

What does correlate? Presence in the editorial articles that AI engines actually cite. Asana appears consistently in the "Best PM Tools" listicles from Zapier, PCMag, and Forbes Advisor — the gatekeeper sources for this category. Smartsheet, despite its massive review count, is frequently absent from those same articles. The reviews exist on G2, but the AI engines aren't looking at G2 for their recommendations — they're looking at the editorial listicles that may or may not mention G2 data.

Key finding: AI engines don't count your reviews. They read the editorial articles that may reference your reviews. The intermediary matters more than the raw signal.

What This Means for Your Business

The five-stage signal chain creates a clear set of priorities for any business that wants to be visible in AI-generated recommendations:

1. Identify your category's gatekeeper sources. Every industry has a small set of editorial sites that AI engines trust. For SaaS products, it might be Zapier, G2 editorial, and PCMag. For local services, it might be Yelp, Thumbtack, and local media. These are the sites you need to be featured on — your own website alone isn't enough.

2. Get specific and quotable. The brand extraction stage rewards concrete facts, not generic descriptions. "Serving the Pacific Northwest since 1987" survives extraction. "We provide quality service" does not. Price points, ratings, years in business, specific service areas, industry awards — these are the details that make it through the filter.

3. Think in four markets, not one. ChatGPT, Gemini, Perplexity, and Claude draw from different source pools with only ~11% overlap. A strategy that makes you visible on Gemini (SEO-heavy) may leave you invisible on Perplexity (editorial-heavy). You need to know where you're strong and where you're missing on each engine independently.

4. Own your framing. AI engines don't just decide whether to mention you — they decide how to describe you. "Best for enterprise teams" and "most affordable option" are different positioning slots. The framing in your source articles shapes the framing in AI answers. Make sure the gatekeeper sources describe you the way you want to be described.

5. Stop chasing review volume as an AI strategy. Reviews matter for trust signals and conversion, but they don't drive AI recommendations. The editorial intermediaries between your reviews and the AI engine are what matter. A glowing mention in one Zapier listicle does more for your AI visibility than 5,000 additional G2 reviews.

Find out where AI engines are recommending your competitors instead of you

A BeCited audit maps exactly which sources AI engines cite for your category, which ones mention your competitors, and where you're missing from the conversation.

Get Your Free Check