BeCited / Case Studies / Ridgeview Window Cleaning
Local Service · Audit Case Study

The local market leader — still being skipped by ChatGPT.

Ridgeview Window Cleaning is the #1 brand in their Pacific Northwest market. AI mentioned them in 57% of buyer-intent queries — and recommended someone else most of the time. BeCited’s audit found exactly why, and what to do about it.

Audit date: Apr 1, 2026
Profile: Local service
Engines tested: ChatGPT · Claude · Perplexity
Captures: 30 (10 prompts × 3 engines)
Block 01 — Brand

Who they are

Ridgeview Window Cleaning is a 15-year-old exterior cleaning company operating in the Pacific Northwest regional market. They cover window cleaning, gutter cleaning, and pressure washing, with roof and solar panel cleaning as secondary services. They hold the highest review count in the county for window cleaning (850+ Google reviews) and the owner mentors over 100 exterior cleaning businesses nationwide — a thought-leadership angle no local competitor matches.

Category: Exterior cleaning
Market: Pacific Northwest
Years in business: 15
Website: ridgeviewwindows.example (fictional sample)
Block 02 — Starting Point

What the audit found at baseline

You’re the #1 brand in this market — but recommended in only 31% of queries. AI lists you, then recommends someone else.
Score: 57 of 100 (Grade C)
95% confidence interval: 34–76
Market rank: 1 of 19
Market average score: 24.5
Profile percentile: P50 (n=2)

GEO Visibility: 57%
Recommendation Strength: 31%
Source Presence: 75%
Engine Consistency: 64%

Engine-by-engine

ChatGPT: 20% recommended · 30% presence · weak signal
Claude: 60% recommended · 80% presence · performing well
Perplexity: 30% recommended · 60% presence · 30-pt mention/rec gap
The bottleneck
Across 30 captures on 3 engines, Ridgeview appears in 17 responses but is the primary recommendation in only 11. Their differentiator — “15 years in business in the Pacific Northwest” — surfaced in 0 of 17 mentions. AI can’t recommend a claim it can’t cite. The biggest single gap: 8 high-priority prompts (all location-intent) where competitors win on review-density signals or content depth.
Block 03 — Playbook

What BeCited prescribed

The audit produced 7 ranked priority moves, sorted by leverage (impact ÷ √effort × confidence). Five carry high-impact labels; total estimated effort across the playbook is about 28 hours. The strategy: tighten engine-specific gaps first, surface the unsurfaced differentiator, and convert 4 passing mentions into recommendations — the cheapest conversions in the plan. Below: the top 5 moves in their actual ranked order.
1. Fix Perplexity visibility (30% → 50% rec)
Impact: high · Effort: ~3 hours · Engine-specific
Perplexity mentions you in 60% of responses but only recommends you in 30% — a 30-point gap. AI lists you, then recommends someone else. The fix is language on your site, not visibility.
Success metric: Perplexity recommendation rate ≥ 50% on next audit.
2. Surface your key differentiator in web content
Impact: high · Effort: ~3 hours · Content
The “15 years in business in the Pacific Northwest” differentiator does not appear in any AI answer (0 of 17 mentions). AI can’t recommend what it can’t cite.
Success metric: Differentiator surfaces in ≥ 20% of AI mentions in next audit.
3. Fix ChatGPT visibility (20% → 40% rec)
Impact: high · Effort: ~4 hours · Engine-specific
ChatGPT mentions Ridgeview in only 30% of responses; the sources it cites endorse competitors. Manual review of losing prompts against ChatGPT’s top-cited domains identifies the targets.
Success metric: ChatGPT recommendation rate ≥ 40% on next audit.
4. Publish dedicated pressure washing content
Impact: high · Effort: ~6 hours · Content
Pressure washing performs at an 11.1% recommendation rate, behind the overall average and flagged in client intake as a known pain point. It needs a dedicated service page with pricing, process, FAQs, and customer quotes.
Success metric: Pressure washing recommendation rate ≥ 40% on next audit.
5. Publish dedicated gutter cleaning content
Impact: high · Effort: ~6 hours · Content
Gutter cleaning performs at a 22.2% recommendation rate, also behind the overall average. Same remedy as pressure washing: a dedicated /gutter-cleaning page with structured content and schema, targeted at known lost prompts.
Success metric: Gutter cleaning recommendation rate ≥ 40% on next audit.

Moves 6–7 (claim the Facebook business page, convert 4 passing mentions to recommendations) are lower-leverage and come after the top five. Full playbook: 7 moves, ~28 hours.
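The leverage ranking above (impact ÷ √effort × confidence) can be sketched in a few lines. A minimal illustration, assuming a simple numeric weighting for the impact labels; the weights, effort hours, and confidence values below are hypothetical, not taken from Ridgeview's actual audit data:

```python
import math

# Hypothetical impact weights; the audit's real weighting may differ.
IMPACT = {"high": 3, "medium": 2, "low": 1}

def leverage(impact: str, effort_hours: float, confidence: float) -> float:
    """Leverage = impact / sqrt(effort) * confidence; higher ranks sooner."""
    return IMPACT[impact] / math.sqrt(effort_hours) * confidence

# Illustrative moves: (name, impact label, effort in hours, confidence 0-1).
moves = [
    ("Fix Perplexity visibility", "high", 3, 0.9),
    ("Publish pressure washing page", "high", 6, 0.8),
    ("Claim Facebook business page", "low", 1, 0.7),
]
ranked = sorted(moves, key=lambda m: leverage(*m[1:]), reverse=True)
for name, *args in ranked:
    print(f"{name}: {leverage(*args):.2f}")
```

The square root on effort is what lets a 3-hour high-impact fix outrank a 6-hour one: doubling the effort only dampens the score by √2.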

Block 04 — Replicate It

What you can apply if your business looks like this

Projected impact — success metrics from the playbook
If executed in full, the playbook targets:
  • Perplexity: recommendation rate from 30% → ≥50%
  • ChatGPT: recommendation rate from 20% → ≥40%
  • Pressure washing category: recommendation rate from 11.1% → ≥40%
  • Gutter cleaning category: recommendation rate from 22.2% → ≥40%
  • Differentiator surfacing: from 0% → ≥20% of AI mentions
  • Flip targets: at least 2 of 4 passing mentions converted to primary recommendations

These are the success metrics specified in the playbook, not measured outcomes. Ridgeview’s 90-day re-audit is scheduled for July 2026; this case study will be updated with measured deltas from delta.json at that time. We don’t fabricate wins.

Six steps any local service business can run from this audit

Step 01
Separate “mentioned” from “recommended” on every engine
A 30-point gap between presence and recommendation (Ridgeview's Perplexity profile) is the classic passing-reference problem. AI knows you exist; it has no language to justify recommending you. The fix is on-page content, not link-building.
  • Pull your engine-by-engine recommendation rate (most agencies skip this).
  • If presence is high but recommendation is low, audit your own service pages for missing claims.
  • Add quotable sentences: review counts, years in business, awards, named outcomes.
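The first bullet reduces to a per-engine tally over your captures. A minimal sketch, assuming each capture record carries `engine`, `mentioned`, and `recommended` fields; the sample records below are fabricated for illustration, not Ridgeview's data:

```python
from collections import defaultdict

# Fabricated capture records for illustration only.
captures = [
    {"engine": "Perplexity", "mentioned": True, "recommended": False},
    {"engine": "Perplexity", "mentioned": True, "recommended": True},
    {"engine": "ChatGPT", "mentioned": False, "recommended": False},
    {"engine": "ChatGPT", "mentioned": True, "recommended": True},
]

def engine_rates(captures):
    """Per-engine presence, recommendation rate, and mention/rec gap in points."""
    tally = defaultdict(lambda: {"n": 0, "mentioned": 0, "recommended": 0})
    for c in captures:
        t = tally[c["engine"]]
        t["n"] += 1
        t["mentioned"] += c["mentioned"]
        t["recommended"] += c["recommended"]
    return {
        engine: {
            "presence": t["mentioned"] / t["n"],
            "rec_rate": t["recommended"] / t["n"],
            "gap_pts": round(100 * (t["mentioned"] - t["recommended"]) / t["n"]),
        }
        for engine, t in tally.items()
    }

print(engine_rates(captures))
```

A large `gap_pts` on one engine is the passing-reference signature this step is hunting for.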
Step 02
Make sure your differentiator is on your website verbatim
Ridgeview’s “15 years in business in the Pacific Northwest” surfaced in 0 of 17 AI mentions. The claim was true; it just wasn’t in a quotable form on their pages. AI engines paraphrase what they can ground. If a differentiator isn’t in your copy, it doesn’t exist for AI.
  • List your top 3 differentiators in writing.
  • Find each one verbatim on your homepage or About page. If it’s not there, add it.
  • Repeat the claim in variant form on 2–3 other pages and in your JSON-LD schema.
Step 03
Find your weakest service category and treat it like a separate landing page
Ridgeview’s pressure washing (11%) and gutter cleaning (22%) recommendation rates lagged their overall average. The playbook prescribes a dedicated service page for each, with pricing ranges, process steps, FAQs, and customer quotes: the structured content AI engines reliably extract from.
  • Score each service’s recommendation rate independently — not just the brand average.
  • For weak services, build a dedicated page with pricing, process, FAQ, and 3–5 first-name customer quotes.
  • Add service-specific JSON-LD (LocalBusiness or Service schema with serviceArea).
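One way to produce the last bullet's markup is to generate a Service-type JSON-LD payload per page. A sketch using schema.org's `Service` and `LocalBusiness` types; the brand name and URL are the fictional sample from this case study, and the `areaServed` value is an assumption to replace with your real service area:

```python
import json

service_schema = {
    "@context": "https://schema.org",
    "@type": "Service",
    "serviceType": "Gutter cleaning",
    "provider": {
        "@type": "LocalBusiness",
        "name": "Ridgeview Window Cleaning",  # fictional sample brand
        "url": "https://ridgeviewwindows.example",
    },
    # Assumed region value; swap in your actual service area.
    "areaServed": {"@type": "Place", "name": "Pacific Northwest"},
}

# Emit the payload for a <script type="application/ld+json"> tag.
print(json.dumps(service_schema, indent=2))
```

Embedding the same verbatim differentiator claims in the page's visible copy and in this schema gives engines two grounding paths to the same fact.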
Step 04
Reverse-engineer your competitors’ positioning language
Pacific Northwest ProWash’s recurring AI phrases were “100% satisfaction,” “top-rated,” and “5-star.” Ridgeview’s site lacked all three in quotable form. If a competitor owns specific phrasing in AI answers, the phrase itself is the handle — replicate or exceed it on your own pages.
  • For each top competitor, list the 3–5 phrases AI uses to describe them.
  • Identify which of those phrases you could legitimately claim too.
  • Add the equivalent claim, with proof (specific number, source, or date), to your relevant page.
Step 05
Triage your “passing reference” prompts — cheapest conversions in your playbook
Ridgeview had 4 prompts where AI named the brand without endorsing it. Those don’t need new visibility, just stronger reasons to recommend. They are among the cheapest conversions in the plan because the discovery work is already done.
  • List every prompt where you appear but aren’t recommended.
  • Read what the winning brand’s AI quote says — that’s the bar.
  • Match or exceed the winner’s claim on the relevant page (concrete number, named recognition).
Step 06
Re-audit at 90 days — track recommendation rate, not just presence
Most agencies report on visibility because it’s easier. Recommendation rate is the metric that converts. A 95% binomial CI on each dimension separates real changes from noise; without it, every fluctuation looks like progress.
  • Schedule a re-audit on the same prompts after each major content shipping cycle.
  • Track recommendation rate per engine and per category, not just an aggregate score.
  • Use binomial 95% CI to confirm real movement vs. variance.
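The binomial 95% CI in the last bullet can be computed with a Wilson score interval, one standard choice (whether BeCited uses Wilson or a different interval isn't stated here). A self-contained sketch, run on the case study's 17-of-30 mention count:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z=1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 17 mentions across 30 captures, per the audit's baseline numbers.
lo, hi = wilson_ci(17, 30)
print(f"presence 17/30 = {17/30:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

With only 30 captures the interval spans tens of points, which is exactly why a single-digit swing between audits shouldn't be read as progress.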
Who this applies to
If you’re a local service business (home services, trades, professional services, hospitality) competing in a defined regional market with 5–30 named local competitors, this same pattern very likely applies. Local-service GEO is dominated by review density (Google, Yelp, Angi, Thumbtack), location-intent prompts (“best [service] in [city]”), and on-page differentiation. The Ridgeview bottleneck — mentioned-but-not-recommended on the engine that aggregates review citations — is the single most common pattern we see in this profile.
About BeCited

Want to see your numbers, not someone else’s?

BeCited is a $2,000 GEO audit service. 100–300 buying-intent prompts, 4 AI engines, 15 site-readiness checks, scored against your real competitors with 95% confidence intervals and a calibrated mention-type rubric (Cohen’s κ = 0.722). Delivered in one week by a named analyst.