There's a peculiar feeling haunting marketing teams right now. You check Search Console: rankings are holding. Impressions are steady. The blue links haven't vanished. And yet... the pipeline feels thinner. The leads feel colder. Something is wrong, but the dashboard says everything is fine.
That's because you're measuring the wrong thing with the right tool.
When the Search Page Stopped Being a Doorway
For twenty years, the search results page was an entry hall. You ranked high enough, users clicked through, and your website did the persuading—slowly, across multiple pages, until trust accumulated and conversion happened.
AI search doesn't work like that anymore. It's a pre-processor. It breaks the question into parts, pulls from multiple sources, synthesizes an answer, and only then offers supporting links. Your website's first impression is often no longer the blue link—it's a citation in an AI Overview, a source card, a sentence extracted to support a judgment.
This is what marketers have sensed but struggled to articulate: SEO isn't broken. The measurement framework is just too slow for the new game.
Google itself says AI Overviews still rely on the same crawlable foundations—content, internal links, page experience, structured data. The infrastructure hasn't changed. What changed is the decision path. The user is making up their mind before they click, informed by an answer that may have already summarized your position (or mischaracterized it, or ignored it entirely).
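That structured-data foundation is easy to make concrete. As an illustrative sketch (the question, answer, and page content below are placeholders, not anything from a real site), here is FAQPage JSON-LD built and serialized in Python, ready to embed in a page's head:

```python
import json

# Hypothetical FAQ content -- replace with your own page's Q&A.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does AI search still use crawlable content?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. AI Overviews draw on the same crawlable "
                        "foundations: content, internal links, page "
                        "experience, and structured data.",
            },
        }
    ],
}

# Emit the <script> block you would embed in the page <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(faq_jsonld, indent=2)
    + "\n</script>"
)
print(snippet)
```

The markup doesn't change what the page says; it changes how unambiguously a machine can read what the page says.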
The Four-Point Audit
When I evaluate a website now, I look at four things before I even check rankings.
1. Can Your Paragraphs Be Stolen to Answer Questions?
Most articles are beautifully written for humans and useless for AI extraction. They meander through context, transitions, background, narrative flow. A person can enjoy the journey. An algorithm trying to extract a definitive answer gets lost in the prose.
A useful paragraph needs to be extractable. It should contain: a clear question, a direct answer, the conditions under which that answer applies, the exceptions where it doesn't, and the logical next step. Not every paragraph needs to be an FAQ, but every important one should be able to stand alone as a complete cognitive unit.
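One way to operationalize that checklist is a crude lint pass over your drafts. This is a heuristic sketch, not a real extraction model: the cue words and the sample paragraph are my own invented placeholders, and substring matching will misfire on real prose.

```python
# Crude heuristic: flag which of the five elements a paragraph signals.
# Cue lists are illustrative guesses, not a validated model; substring
# matching is deliberately naive (e.g. "is" matches inside "this").
SIGNALS = {
    "answer":     ("use ", "should", "means"),
    "conditions": ("when ", "if "),
    "exceptions": ("unless", "except", "avoid"),
    "next_step":  ("next", "then", "compare"),
}

def extractability_report(paragraph: str) -> dict:
    """Return which cognitive-unit elements the paragraph appears to contain."""
    text = paragraph.lower()
    report = {"question": "?" in paragraph}
    for element, cues in SIGNALS.items():
        report[element] = any(cue in text for cue in cues)
    return report

para = ("Which CMS should a small team pick? Use a hosted CMS when you "
        "lack dev resources, unless you need custom checkout logic. "
        "Next, compare hosting costs.")
print(extractability_report(para))  # every element present -> all True
```

A paragraph that fails every check isn't necessarily bad writing, but it is probably invisible to an answer engine.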
If AI can't lift a paragraph out of your page and drop it into an answer with confidence, that paragraph doesn't exist in the AI's universe.
2. Do Your Page Types Know Their Jobs?
The most common structural failure I see isn't thin content—it's confused content. Every page trying to do everything at once.
A Hub page organizes the landscape. A Node page explains a specific concept. A Comparison page helps someone choose between alternatives. An FAQ page handles precise objections. A Transaction page closes the deal.
When these roles blur—when your product page also tries to be an educational encyclopedia, or your blog post also tries to convert—you end up with pages that are comprehensible to no one. Google sees fragments. Users see noise. AI extracts loose, disconnected snippets that don't cohere into authority.
3. Do Your Internal Links String Together a Decision Path?
AI search accelerates the user's jump to high-intent questions: "Which one should I choose?" "Is this right for me?" "How does this compare to the alternative?"
If your article answers the knowledge question but dead-ends—no path to comparison, no case study, no pricing context, no suitability filter—that traffic enters and exits as information consumption. It never converts to commercial action.
The internal link structure isn't just SEO plumbing anymore. It's the decision architecture that turns a citation into a client.
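If you model your site as a directed graph of page roles (the pages and links below are hypothetical), a short breadth-first search shows which knowledge pages dead-end before any transaction page, which is exactly the blockage described above:

```python
from collections import deque

# Hypothetical site graph: page -> pages it links to. Roles follow the
# audit's taxonomy: hub, node (knowledge), comparison, transaction.
links = {
    "hub":         ["guide-to-x", "compare-x-y"],
    "guide-to-x":  ["faq-x"],        # knowledge page
    "faq-x":       [],               # dead end: no onward path
    "compare-x-y": ["pricing"],
    "pricing":     [],               # transaction page
}
transaction_pages = {"pricing"}

def reaches_transaction(start: str) -> bool:
    """BFS: can a reader click from `start` to any transaction page?"""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        if page in transaction_pages:
            return True
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

dead_ends = sorted(p for p in links if not reaches_transaction(p))
print(dead_ends)  # pages with no path to a decision
```

Here the guide and its FAQ both strand the reader; the fix is a link, not a rewrite.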
4. Does Your Content Contain Actual Judgment?
The easiest content to produce—and the most obsolete in the AI era—is the kind that restates what everyone already knows. AI doesn't need your summary of common knowledge. It has Wikipedia. It has a thousand other summaries.
What AI lacks—and what makes a source worth citing—is discriminating judgment. Under what conditions should you do this? When should you not do this? What's the priority order? What are the common mistakes? How do you verify it's working?
Content with judgment is content with edges. It takes positions. It risks being wrong. And because of that, it's the only kind of content an AI can confidently cite as a source rather than mere background noise.
The Hierarchy of Panic
If you only watch rankings, you'll think the problem hasn't arrived yet.
If you only watch traffic, you'll notice late—when the pipeline is already dry.
But if you start tracking AI citations, brand search volume, and zero-click exposure that actually converts, you'll see where the blockage is forming before it hardens.
This is why I rarely ask "Does this page rank?" anymore. I ask:
- Can it be cited?
- Can it help a user make a judgment?
- Can it route to the next decision page?
- Can both AI and humans understand why you're trustworthy?
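Those four questions roll up into a simple per-page scorecard. A minimal sketch, assuming each check is a manual yes/no judgment (the field names and the example page are placeholders):

```python
from dataclasses import dataclass

@dataclass
class PageAudit:
    """The four audit questions as yes/no judgments for one page."""
    citable: bool        # can a paragraph be lifted as an answer?
    aids_judgment: bool  # does it help the user decide?
    routes_onward: bool  # does it link to the next decision page?
    shows_trust: bool    # is trustworthiness legible to AI and humans?

    def score(self) -> int:
        # Equal weights; adjust if one failure hurts you more than others.
        return sum([self.citable, self.aids_judgment,
                    self.routes_onward, self.shows_trust])

page = PageAudit(citable=True, aids_judgment=True,
                 routes_onward=False, shows_trust=True)
print(f"{page.score()}/4")  # prints "3/4" -- fix the lowest scorers first
```

The point isn't the arithmetic; it's forcing every page through the same four gates before you look at a ranking report.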
The Real Question
AEO (answer engine optimization) isn't a rebrand. It's a different question entirely.
SEO asks: How high do I rank?
AEO asks: When someone asks a question, does the AI trust me enough to use my answer?
GEO (generative engine optimization) asks: When the AI recommends solutions in my category, does it mention me at all?
The infrastructure is the same. The intent is not. You're no longer optimizing for a click. You're optimizing for inclusion in a synthesized answer that may never result in a visit—but may result in a decision.
If you have traffic and eyeballs but conversion feels stuck, the issue isn't your funnel. It's that your content is visible but not usable by the new intermediaries that now sit between you and your customer.
Fix the extractability. Fix the architecture. Fix the judgment.
Then check the rankings. They'll follow.
— James, Mercury Technology Solutions, Hong Kong, May 2026


