- "Relevance Engineering" is Mike King's term for the discipline succeeding classical SEO. It pulls from information retrieval, machine learning, UX, content strategy, and digital PR.
- The unit of work moves from the page to the passage. AI engines extract candidate passages, embed them as vectors, rank by cosine similarity, and synthesize from the highest-scoring chunks.
- One user query fans out into roughly nine synthetic subqueries (related, implicit, comparative, recent, personalized, reformulation, entity-expanded, and others) before retrieval even runs.
- BM25 keyword matching is not retired. It is fused with dense vector retrieval through reciprocal rank fusion. Tools that only do keyword analysis are blind to half the system.
- The new measurement frame: brand citations, mindshare, scroll-to-text fragment signals, and chunk-level relevance scoring. CTR and rank position are not the right instruments anymore.
The phrase "Relevance Engineering" comes out of an interview Mike King gave to Advanced Web Ranking, where he laid out a clean break from the framing of classical SEO. The argument is not that SEO is dead. It is that the work has moved into a different layer of the stack, and the tooling, the metrics, and the role itself have to move with it.
"We're not just mechanics tweaking engines. We're engineers building the actual systems."
— Mike King, iPullRank, in conversation with Advanced Web Ranking
The framing matters because of how it changes the shape of the problem. SEO, at its peak, was a discipline of inputs (keywords, links, on-page elements) targeted at one output (rank position). Relevance Engineering treats the whole pipeline as the surface area: query expansion, passage extraction, vector ranking, model synthesis, citation selection. Each stage has its own optimization target.
From pages to passages
The single most important shift is the unit of analysis. Classical SEO tools index pages and report metrics on pages. Relevance Engineering treats every page as a collection of passages, and the passages are what get retrieved.
King's example: he built a prototype using LlamaIndex and a SERP API that pulled the top 10 search results for a query, embedded them as vectors at the passage level, and generated AI Overview-style responses. The point of the prototype was to demonstrate that no commercial SEO tool tracks visibility at this level.
"There's no SEO tool out there that does that. All the existing tools are still focused on optimizing entire pages."
The implication for content strategy is direct: a page can rank well in classical SEO but produce poor passages, and a page with a single brilliant 200-word section can be cited heavily even if the rest of the page is mediocre. Optimization decisions need to happen at the section level.
The passage rule. Every claim that matters to your business should live inside a single, self-contained paragraph. If a model has to assemble three sections to find your answer, it usually picks a competitor whose answer is already assembled.
The retrieval pipeline, end to end
King describes Google's AI Mode (and structurally similar engines) as a multi-stage retrieval pipeline. The user types one thing. The system runs four stages.
Query expansion
The user's query is rewritten into many synthetic subqueries. King's count from the interview: "One query expands into a broader set, and that expanded set generates a corpus of documents." Roughly nine variants per query, including related queries, entity expansions, inferred queries, and reformulations.
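The interview does not enumerate Google's actual nine variants, so the fan-out step can only be sketched with hypothetical templates. The template strings below are illustrative labels for the variant categories King names, not the real rewrites:

```python
def fan_out(query: str) -> list[str]:
    # Hypothetical templates approximating the ~9 synthetic variants.
    # Category labels in comments map to the types named in the interview.
    templates = [
        "{q}",                      # original
        "what is {q}",              # implicit / inferred
        "best {q}",                 # comparative
        "{q} vs alternatives",      # comparative
        "{q} 2025",                 # recent
        "{q} for beginners",        # personalized
        "how does {q} work",        # reformulation
        "{q} pricing",              # entity-expanded
        "{q} reviews",              # related
    ]
    return [t.format(q=query) for t in templates]
```

Each of these subqueries retrieves its own candidate set before anything is merged, which is why a page that answers only the literal query can still lose the overall retrieval.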
Passage extraction
The expanded queries hit the index, and the system extracts candidate passages from the documents that come back. Not whole pages. Short, semantically self-contained chunks.
Comparative ranking
The candidate passages are compared against one another, often pairwise: "Given this query, which of these two passages is better?" The output is a relative ranking, not an absolute score. Vector similarity drives most of it.
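A minimal sketch of the pairwise step, assuming a `better(a, b)` judge that answers "which of these two passages is better for this query?" (in practice an LLM prompt or a similarity comparison). Counting pairwise wins yields exactly the kind of relative ranking described above, with no absolute score anywhere:

```python
from itertools import combinations

def pairwise_rank(passages: list[str], better) -> list[str]:
    # `better(a, b)` -> True if passage a beats passage b for the query.
    # Tally wins across all pairs, then sort by win count.
    wins = {p: 0 for p in passages}
    for a, b in combinations(passages, 2):
        if better(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(passages, key=lambda p: wins[p], reverse=True)
```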
Synthesis
Final passages feed into a language model that generates the answer. Citations are selected from the passages that informed the answer, not necessarily the top-ranked documents overall.
Notice what the pipeline does not contain. There is no "rank for the keyword" step. There is no concept of a "page ranking #1." The output is an answer assembled from passages, with citations that explain which sources contributed.
BM25 is not dead. But it is half of the system
One of the cleaner technical points in the interview is that classical retrieval has not gone away. BM25, the keyword-matching algorithm that has powered search for decades, is still in the pipeline. What changed is that it is fused with dense vector retrieval through reciprocal rank fusion (RRF).
The combination matters. Pure vector retrieval is bad at exact-match queries (a product SKU, a model number, a person's name). Pure BM25 is bad at semantic queries ("the best CRM for non-profits with under 50 staff"). RRF blends the two ranked lists so the system gets the strengths of both.
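RRF itself is only a few lines: each retriever contributes 1/(k + rank) per document it returns, the contributions are summed, and the fused list is re-sorted. The constant k = 60 is the value from the original Cormack et al. (2009) formulation:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    # rankings: one ranked list of doc IDs per retriever (e.g. BM25, dense).
    # A document near the top of ANY list gets a meaningful contribution;
    # a document on several lists accumulates score across them.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Note what this implies for optimization: a passage that ranks moderately well in both the lexical list and the vector list can beat a passage that tops one list and misses the other entirely.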
The 95% problem. King's observation: roughly 95% of SEO tools still do only lexical analysis. They cannot tell you how a passage performs in vector retrieval. If the only signal you can see is keyword overlap, you are missing the half of the system that determines AI visibility.
The Relevance Engineer role
The role King describes is broader than what most SEO teams currently staff. Three responsibilities stand out from the interview.
Passage-level optimization
Not "rewriting the page." Auditing each passage for self-containment, semantic clarity, and quotability. A passage that requires the previous paragraph to make sense is worse than the same passage rewritten to stand alone. Strong passages can win retrieval competitions even when the page around them is weak.
Omni-media content planning
Text, video, images, transcripts, audio, code. Each modality is a separate retrieval source. A YouTube transcript can carry your brand into a Perplexity answer. A diagram with a clear caption can be parsed into a structured fact. Planning content for one modality and ignoring the others is leaving signal on the table.
Custom tooling
The SEO-tool ecosystem has not caught up. Existing platforms do not vectorize passages, do not simulate fan-out, do not track citations across LLM outputs, and do not score chunk-level relevance. Relevance engineers either build their own tools or work with vendors that have started to. King calls this out as one of the defining features of the role: you cannot do the work with last decade's stack.
Mentions are not enough. Context is
One sentence from the interview is worth quoting in full because it reframes a metric most teams celebrate too early.
"It's not enough for your brand to be mentioned. It has to be mentioned in a context that actually makes sense."
A brand mentioned inside a paragraph that contradicts what your buyers want to hear is worse than no mention at all. A brand mentioned inside a listicle next to direct competitors, with the competitors' value props leading and yours appearing as an afterthought, gives the AI engine a structured comparison in which you lose.
This is why measurement frameworks for GEO have to separate presence rate (mentioned at all) from recommendation rate (mentioned with positive framing) from position-weighted recommendation strength (where in the list, with what context). A single "AI mention count" does not see any of this.
What replaces CTR and rank as success metrics
The interview names four measurement frames that replace the SEO defaults.
| Old metric | New metric | Why it changes |
|---|---|---|
| Click-through rate | Brand citations in LLM outputs | The user often does not click. They take the synthesized answer. |
| Impressions | Mindshare metrics | Mindshare is impression-based, like display advertising: were you in the answer at all? |
| Traffic from search | Scroll-to-text fragment signals | When users do click, the URL fragment shows which passage drove the click. Directional signal. |
| Page rank position | Chunk-level relevance scoring | How does each passage perform in vector retrieval for its target queries? |
None of these are entirely new (mindshare was a planning concept long before AI search), but their relevance to digital marketing is new. The measurement programs that win this era are the ones that instrument for the answer, not the click.
How BeCited applies this
Our audits operationalize the relevance-engineering framing in a few specific ways.
- Passage-level scoring. The site readiness check explicitly grades quotability: paragraph length distribution, answer-first patterns, stat density, and self-contained chunks. A page can rank well on classical signals and still fail this check.
- Multi-engine capture. Every audit runs across ChatGPT, Gemini, Perplexity, and Claude in parallel. Each engine has different retrieval pipelines, so the same passage can succeed in one and fail in another.
- Position-weighted recommendation strength. Being the first brand named in a list of recommendations is worth ~1.25x the weight of being third or later. The score reflects context, not just presence.
- Source tier classification. Citations are classified into primary, secondary, and tertiary sources per category. The ratio between tiers is more diagnostic than total citation count.
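As an illustration of how position weighting composes with presence, here is a simplified sketch (not BeCited's actual scoring formula): a mention only scores if its framing is positive, and the first-named brand gets the ~1.25x uplift mentioned above.

```python
def recommendation_strength(position: int, positive: bool,
                            first_weight: float = 1.25) -> float:
    # Illustrative only. A negative or neutral mention scores 0:
    # presence without positive framing is not a recommendation.
    if not positive:
        return 0.0
    # First-named brand in the list carries extra weight.
    return first_weight if position == 1 else 1.0
```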
The framing King put on the role of Relevance Engineer is the one we think most digital teams should be hiring against. Not "AI SEO specialist." Not "GEO expert." A practitioner who can read a retrieval pipeline, audit passages, and instrument the work end to end.
Frequently asked questions
What is Relevance Engineering?
Relevance Engineering is a term coined by Mike King of iPullRank for the discipline succeeding classical SEO: a channel-agnostic approach drawing from information retrieval, machine learning, UX, content strategy, and digital PR. The shift is from optimizing pages for keyword position to engineering passages and entities for vector-space retrieval inside RAG (retrieval-augmented generation) pipelines.
What is the difference between BM25 and vector retrieval?
BM25 is a sparse keyword-matching algorithm that ranks documents based on term frequency and inverse document frequency. Vector retrieval is a dense approach that converts every passage into a numeric embedding and ranks by cosine similarity to the query embedding. Modern AI engines use hybrid retrieval that combines both via reciprocal rank fusion (RRF).
What is passage indexing?
Passage indexing means storing and retrieving sections of a page as independent units rather than treating the page as one document. AI engines extract candidate passages, embed them as vectors, and rank them against the query directly. A 200-word section can rank or be cited even if the rest of the page is mediocre.
What are synthetic query types?
When a user submits a query, AI search systems do not retrieve only against that one query. They generate multiple synthetic subqueries: related queries, implicit queries, comparative queries, recent queries, personalized queries, reformulation queries, and entity-expanded queries. Each subquery retrieves its own candidate set, and the results are merged.
How is success measured in Relevance Engineering?
The measurement frameworks shift from CTR and rank position to brand citations in LLM outputs, mindshare metrics (impression-based, similar to display advertising), directional traffic signals from scroll-to-text fragments, and chunk-level relevance scoring. The unit of analysis is the passage and the citation, not the page and the click.
What does a Relevance Engineer actually do?
The role moves beyond keyword research and link building into building custom tools (existing SEO platforms do not support passage-level tracking), planning omni-media content (text, video, images, transcripts), and engineering context for brand mentions. The shorthand: it's not enough for your brand to be mentioned; it has to be mentioned in a context that actually makes sense.
Stop guessing how your passages perform in retrieval pipelines.
BeCited audits score passage quotability, capture mentions across four engines, and tag every citation with source tier and position weight. The relevance-engineering frame, instrumented.
Sources cited. The "Relevance Engineering" framing, the passage-level critique, the multi-stage retrieval pipeline, the BM25-versus-dense-retrieval point, the role definition, and the measurement frame are drawn from Mike King's interview with Advanced Web Ranking: Optimizing for the New Search: How Relevance Engineering Is Reshaping SEO. Position weighting and source-tier classification reflect BeCited's audit methodology.