BeCited self-audit
Self-audit · May 2026

We audited BeCited. It scored 17.

Same rubric, same engines, no exceptions for the company that built the tool. Grade F. Recommended in 8% of buyer-intent queries. Ranked 3 of the 103 brands AI engines surface for "AI search visibility audit." We're publishing the failing grade because that's the only version of this rubric worth trusting.

Profile: SaaS
Engines: ChatGPT, Claude, Perplexity, Gemini
Captures: 132 across 35 prompts
Audit date: 2026-05-08
Block 01 — Brand

Who BeCited is — and what AI thinks we are

BeCited is a founder-led GEO audit service — flat-rate $2,000 for a one-week audit, $1,500/quarter for tracking. The category is contested by SaaS dashboards (Peec AI, AthenaHQ, Profound) on the tech side and SEO agencies adding GEO services on the consulting side. Our wedge is supposed to be price ($2k vs $10k+ engagements) and verification rigor (manual citation review, no fuzzy-matching). Whether AI engines know that — and recommend us for it — is the thing this audit measures.

Category
AI search visibility audit
Profile
SaaS
Launched
2026 · brand new
Block 02 — Starting Point

The audit’s honest read on us

You rank 3 of 103 in this market and AI lists you in 11% of queries — but recommends you in only 8%. The conversion gap, not the visibility, is the story.
Score: 17 of 100 · Grade F · 95% confidence interval 4–30
Market rank: 3 of 103
Visibility rate: 11.4% · Recommendation rate: 7.6%

Component scores
  • Engine GEO Visibility: 13%
  • Recommendation Strength: 8%
  • Source Presence: 5%
  • Engine Consistency: 52%
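That 4–30 confidence interval is wide because 132 captures is a small sample for a composite score. BeCited's actual interval method isn't documented on this page; as a purely illustrative sketch, a nonparametric bootstrap over per-capture outcomes (with an invented stand-in scoring rule) shows how much noise a sample this size carries:

```python
import random
import statistics

# Stand-in per-capture scoring rule, invented for this illustration:
# 100 if BeCited is the primary recommendation, 40 for any other
# mention, 0 if absent. Counts follow the audit's capture breakdown.
captures = [100] * 10 + [40] * 5 + [0] * 117  # 132 captures total

random.seed(7)
boot = sorted(
    statistics.mean(random.choices(captures, k=len(captures)))
    for _ in range(10_000)
)
lo, hi = boot[249], boot[9_749]  # 2.5th / 97.5th percentiles
print(f"point estimate {statistics.mean(captures):.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```

The absolute numbers here are illustrative; the point is that resampling 132 captures produces intervals this wide.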

Engine-by-engine

  • ChatGPT: 11% recommended · 17% presence · thin coverage on G2-style sources
  • Gemini: 12% recommended · 15% presence · weak third-party entity signals
  • Perplexity: 6% recommended · 9% presence · not on G2, not in primary tier
  • Claude: 0% recommended · 4% presence · widest source ecosystem, hardest to reach
The bottleneck
Of 132 captures, BeCited is the primary recommendation in 10, misattributed in 3, and absent from 117. Five of our six configured differentiators — including the founder-led, manual-verification claim that's our core wedge — surface in zero AI mentions. AI can't recommend a position it can't cite. And without G2, Capterra, or PCMag coverage, AI doesn't even have language to describe us.
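For readers checking the arithmetic, the headline rates fall directly out of those capture counts. A short sketch (note: the two captures not itemized above are inferred by subtraction to be secondary mentions; the audit doesn't state that split explicitly):

```python
# Reconciling the headline rates with the capture counts above.
total = 132
absent = 117
primary = 10          # captures where BeCited is the primary recommendation
misattributed = 3
secondary = total - absent - primary - misattributed  # 2, inferred

visibility = (total - absent) / total   # any mention at all -> 15/132
recommendation = primary / total        # primary recommendation only -> 10/132

print(f"visibility     {visibility:.1%}")     # 11.4%
print(f"recommendation {recommendation:.1%}") # 7.6%
```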
Block 03 — Playbook

What our own tool prescribed

The audit produced 8 ranked priority moves, sorted by leverage (impact ÷ √effort × confidence). Seven carry high-impact labels. The strategy: establish primary-tier directory presence first (G2, Capterra), earn editorial coverage second (PCMag), surface our differentiators in our own copy third. Below, the top 5 in their actual ranked order — not curated, not reordered.
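The leverage formula is simple enough to sketch. The impact, effort, and confidence values below are invented for illustration (the audit's actual inputs aren't published here); the point is the shape: effort is square-rooted, so cheap moves with decent confidence float to the top.

```python
from math import sqrt

def leverage(impact: float, effort: float, confidence: float) -> float:
    """Leverage = impact / sqrt(effort) * confidence, per the ranking above."""
    return impact / sqrt(effort) * confidence

# Illustrative inputs only: impact and confidence on 0-1, effort on a 1-5 scale.
moves = {
    "Claim and optimize G2":          leverage(0.9, 1, 0.8),
    "Pitch PCMag":                    leverage(0.9, 4, 0.8),
    "Surface differentiator on site": leverage(0.7, 2, 0.7),
}
for name, score in sorted(moves.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.2f}  {name}")
```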
1
Claim and optimize G2
Impact: high · ~1 hour + 30-day reviews · Directory
G2 is a primary-tier directory for SaaS. AI already cites G2 23× in this audit — we just aren’t on it. Establishing presence directly increases visibility across all four engines. Highest leverage move in the plan.
Success metric: Verified G2 listing with complete profile and 5+ reviews by next audit.
2
Claim and optimize Capterra
Impact: high · ~1 hour + 30-day reviews · Directory
Capterra is the second primary-tier directory for SaaS. Cited 7× in this audit; we’re absent. Same playbook as G2 — claim, complete the profile, run a structured review-collection campaign with active customers.
Success metric: Verified Capterra listing with complete profile by next audit.
3
Pitch PCMag for editorial coverage
Impact: high · 6–12 weeks · Earned media
PCMag is editorial — you don’t claim a profile, you earn coverage. It’s a primary-tier publication for SaaS roundups. One placement here gets cited by AI. The pitch needs original data and a contrarian angle, not a product announcement.
Success metric: At least 1 mention or quote in PCMag within 90 days.
4
Surface our key differentiator in web content
Impact: high · ~3 hours · Content
Our differentiator — "Founder-led: every audit is done by Owen Kurth personally, not a junior analyst or an LLM" — surfaces in zero of 15 AI mentions. AI can't recommend a claim it can't cite. The fix is putting that language in our site copy in a quotable format (lists, FAQ blocks, schema; see the sketch after this list), not just hero copy.
Success metric: Differentiator surfaces in ≥ 20% of AI mentions next audit.
5
Get cited on Perplexity’s top sources
Impact: high · ~6 hours · Engine specific
Perplexity has the lowest visibility (9% presence) and the most concentrated source preferences. It leans on therankmasters.com, tryprofound.com, amplitude.com, and nightwatch.io — review losing prompts against these top-cited sources, identify which ones endorse competitors but not us, then target those outlets specifically.
Success metric: Perplexity recommendation rate ≥ 15% on next audit.
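On move 4's "quotable format" point: one concrete option is FAQPage structured data, which puts the differentiator claim in a form AI crawlers parse directly. The sketch below is an illustration, not BeCited's published markup; the question wording is invented for the example.

```python
import json

# Illustrative FAQPage JSON-LD. The claim text comes from move 4 above;
# the question framing is invented for this example.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Who performs a BeCited audit?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": ("Founder-led: every audit is done by Owen Kurth "
                     "personally, not a junior analyst or an LLM."),
        },
    }],
}
# Embedded on the page inside <script type="application/ld+json"> ... </script>
print(json.dumps(faq_jsonld, indent=2))
```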
Block 04 — Why we published this

A rubric is only worth trusting if it grades its author honestly

The argument
Most GEO “case studies” are vendor-selected wins. This one isn’t.

The standard pattern in GEO — and in SEO before it — is to publish only the audits that flatter the methodology. A SaaS dashboard that shows you “visibility” usually defines the term in whichever way makes its numbers look good, then publishes case studies of clients who scored well on that definition.

BeCited’s rubric scored BeCited a 17 out of 100. We are publishing it. If we don’t, the rubric doesn’t mean what we say it means — and a rubric that flatters its author is not measurement, it’s marketing.

The audit also returned what every honest audit returns when run on a brand-new SaaS: fix your third-party presence, surface your differentiators, earn editorial coverage. The same playbook a $30k agency would charge for. The only difference is that we're running it on ourselves, in public.

Three things this audit confirms about how the methodology behaves

  • The rubric punishes brand newness. A six-month-old SaaS with no G2 listing, no PCMag review, and no Wikipedia entry will score in the teens regardless of website quality. Site readiness graded B; the GEO score still graded F. Site quality is necessary, not sufficient.
  • The rubric separates “mentioned” from “recommended.” Visibility is 11%, recommendation 8% — a 4-point conversion gap. AI knows we exist; it has no language to endorse us over Peec AI or Profound. That gap is what positioning fixes, not visibility budget.
  • Configured differentiators are testable. We declared six positioning claims at audit setup. The rubric measured whether AI surfaces each one. Five surfaced zero times. The diagnostic isn’t “your messaging is weak” — it’s “these specific six claims aren’t reaching the answer layer, here is which one to fix first.”
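A minimal sketch of what that surfacing check can look like: each declared claim reduced to key phrases and tallied against captured answer text. The phrases and sample answers below are invented, and BeCited's actual matching procedure isn't specified on this page.

```python
# Illustrative differentiator-surfacing check. Claims, phrases, and
# answers are invented; this shows the shape of the test, not the tool.
claims = {
    "founder-led":         ["founder-led", "Owen Kurth personally"],
    "manual verification": ["manual citation review", "no fuzzy-matching"],
    "flat-rate pricing":   ["$2,000 flat", "flat-rate audit"],
}

def surfaced_in(phrases: list[str], answers: list[str]) -> int:
    """Count answers in which any key phrase for the claim appears."""
    return sum(any(p.lower() in a.lower() for p in phrases) for a in answers)

answers = [
    "BeCited runs a flat-rate audit across four engines.",
    "Peec AI and Profound are dashboard-style trackers.",
]
for claim, phrases in claims.items():
    print(f"{claim}: surfaced in {surfaced_in(phrases, answers)} of {len(answers)}")
```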
About BeCited

Want the same audit run on your brand?

Same rubric, same four engines, same 132-capture depth, same priority-move output. $2,000 flat. One week. One named analyst. We publish the methodology and we’ve published our own F grade — the audit you’ll get is the audit you’d run on us.