A Data-Driven Framework for AEO

Optimising for AI Overviews


Most of the advice circulating about AEO is based on prediction and guesswork.

If you work in digital marketing, you have almost certainly heard this in the last six months: "We are invisible in ChatGPT" or "We're not showing up in the AI Overview." Users are getting answers directly from AI platforms without ever clicking through to a website. For marketers who have spent years mastering search engine optimisation, this represents the single largest disruption since mobile-first indexing.

But here is the challenge: most of the advice circulating about Answer Engine Optimisation (AEO) is based on prediction and guesswork. Tools simulate what a user might ask, estimate what an AI might return, and then recommend actions based on those assumptions. That approach has a fundamental flaw: it does not reflect what is actually happening.

Over the past several months, working with organisations ranging from regional tourism boards to digital marketing training companies, I have developed a data-driven framework that moves beyond prediction and into measured reality. Here is the thinking behind it, and how you can apply it to your own brand.

The Three Shifts Every Marketer Needs to Understand

Before diving into the process, it helps to understand why this matters so urgently. Three simultaneous shifts are reshaping the marketing landscape.

First, return on advertising spend is declining. AI-driven ad platforms like Meta and Google are optimising for the easiest conversion, typically retargeting people who have already seen your content. The result is inflated costs and diminishing returns on new customer acquisition. Average cost per acquisition has risen by roughly 20% in many sectors.

Second, zero-click search is becoming the norm. AI Overviews, AI Mode, ChatGPT, Perplexity, Claude: these platforms answer questions directly. Users are not clicking through. This means your traditional attribution models, which relied on tracking clicks from Google or LinkedIn, are breaking down. If there is no click, there is no attribution.

Third, you now need to optimise for six platforms, not one. It is no longer just Google. You need visibility across ChatGPT, Google AI Mode, Perplexity, Claude, Copilot, and increasingly Gemini. And here is the uncomfortable truth: Bing now powers three of those platforms (ChatGPT, Copilot, and, to a degree, Perplexity), so Bing optimisation suddenly matters.

Why Prediction-Based Approaches Fall Short

Most AI visibility tools on the market today follow a similar pattern. They predict the prompts a user might type, simulate the AI response, and compare your brand’s presence against competitors. This is useful as a starting point, but it is fundamentally limited.

The problem is that predicted prompts rarely match what real users actually ask. The volume assumptions are speculative. And the competitive landscape shifts so quickly that a snapshot taken today may be irrelevant by next week.

Think of it this way: traditional SEO moved from keyword guessing to Search Console data years ago. We stopped guessing what people searched for and started measuring it. AI visibility needs the same evolution.

A Framework Grounded in Reality: The Four-Stage Process

Here is the framework I have been refining. It has four stages, each building on the last.

Stage 1: Establish Your Baseline with AI Visibility Auditing

Start by understanding where you stand today. This means running AI-generated prompts across multiple platforms (ChatGPT, Google AI Mode, Perplexity, Claude, Copilot, Gemini) and measuring two distinct things: mentions and citations.

A mention is when the AI names your brand in a response. A citation is when it links to your website. Ideally, you want both. At a minimum, you want mentions, because in a zero-click world, brand recognition in an AI response is equivalent to appearing on page one of Google.
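The mention/citation distinction is easy to operationalise. Here is a minimal sketch of how an audit might classify a single AI response; the brand and domain values, and the idea that your tooling records the answer text plus any cited URLs, are illustrative assumptions:

```python
# Sketch: classify one AI answer as a mention, a citation, both, or neither.
# Assumes your audit tooling captures the answer text and any cited URLs.

def classify_visibility(answer_text, cited_urls, brand, domain):
    """Return whether the response mentions the brand and/or cites the site."""
    mention = brand.lower() in answer_text.lower()
    citation = any(domain.lower() in url.lower() for url in cited_urls)
    return {"mention": mention, "citation": citation}
```

A response that names the brand but links elsewhere still counts as a mention, which is the minimum you are aiming for in a zero-click world.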

The prompts should be derived from your website’s actual audience personas. If your website is well-established, its content already reflects your target audience. Extract the personas, generate realistic prompts for each, and run them. This gives you a holistic AI Visibility Score: a composite of share of voice, citation rate, position, and content quality.
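One way to combine those four signals into a single number is a weighted sum. The weights and the position-decay rule below are illustrative assumptions, not a published formula:

```python
# Sketch: a composite AI Visibility Score from the four components named
# above. Weights are hypothetical; tune them to your own priorities.

def visibility_score(share_of_voice, citation_rate, position, content_quality,
                     weights=(0.35, 0.30, 0.20, 0.15)):
    """Inputs are 0..1, except position (1 = first). Returns a 0..100 score."""
    position_score = 1.0 / position  # decay: 1st -> 1.0, 2nd -> 0.5, ...
    components = (share_of_voice, citation_rate, position_score, content_quality)
    return round(100 * sum(w * c for w, c in zip(weights, components)), 1)
```

A brand mentioned first in half the responses, cited 40% of the time, with strong content, would score around 61 under these weights.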

Crucially, this also needs to be geo-localised. What ChatGPT returns for a user in France about your brand is different from what it returns for a user in the UK. If your audience is international, you need to proxy your requests from the relevant locations.
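Proxying requests by location is straightforward with any HTTP client that accepts a per-request proxy. The proxy hosts below are placeholders; you would substitute your own regional exit nodes:

```python
# Sketch: route the same audit prompt through region-specific proxies so
# each request exits from the audience's location. Hosts are placeholders.

REGION_PROXIES = {
    "fr": "http://fr-exit.example-proxy.net:8080",
    "uk": "http://uk-exit.example-proxy.net:8080",
}

def proxy_config(region):
    """Build a requests-style proxies dict for one audience region."""
    url = REGION_PROXIES[region]
    return {"http": url, "https": url}
```

You would then pass this dict to each audit request, e.g. `requests.post(api_url, json=payload, proxies=proxy_config("fr"))`, and record the region alongside every mention and citation.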

Stage 2: Move from Simulation to Reality with Traffic Interception

This is where most tools stop. But this is where the real insight begins.

Every major AI platform uses specific user agents when crawling your website: ChatGPT has its own, Claude has its own, Google AI Mode has its own. By intercepting this traffic at the CDN level (Cloudflare, Vercel, or similar), you can see in real time which AI bots are visiting, which pages they read, how long they stay, and crucially, you can infer the prompts that triggered those visits.
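At its simplest, interception is a lookup against known crawler user-agent substrings. The signatures below match crawlers the vendors publicly document, but the list changes often, so treat it as a starting point and verify against each vendor's current documentation:

```python
# Sketch: tag incoming requests by AI vendor from the User-Agent header.
# Signature list is a snapshot; vendors add and rename crawlers regularly.

AI_BOT_SIGNATURES = {
    "GPTBot": "openai",
    "ChatGPT-User": "openai",
    "OAI-SearchBot": "openai",
    "ClaudeBot": "anthropic",
    "PerplexityBot": "perplexity",
    "GoogleOther": "google",
    "bingbot": "microsoft",
}

def identify_ai_bot(user_agent):
    """Return the AI vendor behind a request, or None for ordinary traffic."""
    for signature, vendor in AI_BOT_SIGNATURES.items():
        if signature.lower() in user_agent.lower():
            return vendor
    return None
```

Run this over your CDN logs and aggregate by vendor, path, and day, and you have the raw material for everything in Stages 3 and 4.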

This is the equivalent of moving from keyword prediction to Search Console data. Instead of guessing what AI systems are looking for, you are measuring it. You can see that ChatGPT sent 12,000 visits last week, that it is predominantly reading your food-related content, and that Google AI Mode, which barely registered a month ago, now accounts for 6.7% of your AI traffic.

The inferred prompts from real traffic then replace your simulated prompts, giving you a grounded dataset to work from.

Stage 3: Optimise Based on Evidence, Not Guesswork

With real data in hand, you can now generate targeted recommendations. The system should tell you, for each prompt where you are not being mentioned, why you lost and what your competitors did differently. More importantly, it should identify semantic cluster opportunities: topic areas where competition is low but potential impact is high.

For example, a tourism organisation might discover that the cluster around “traditional local markets” has very low saturation but high user intent. That is an immediate content creation opportunity that will move the needle.
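The low-saturation, high-intent idea can be sketched as a simple scoring pass. Both inputs and the scoring rule here are assumptions: intent and saturation would come from your own audit data, scaled to 0..1:

```python
# Sketch: rank semantic clusters by opportunity. "intent" estimates user
# demand (0..1); "saturation" is the share of prompts where competitors
# are already mentioned (0..1). The scoring rule is illustrative.

def cluster_opportunity(clusters):
    """Sort (name, intent, saturation) tuples; best opportunities first."""
    scored = [
        (name, round(intent * (1 - saturation), 2))
        for name, intent, saturation in clusters
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

A cluster like "traditional local markets" with high intent and almost no competing coverage would surface at the top, while a crowded topic like "beaches" would fall to the bottom even if demand is similar.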

A critical insight here is the role of corroboration. When an AI platform finds content on your website, it does not just take it at face value. It goes to Reddit, YouTube, Quora, and other sources to corroborate the information. If third-party sources confirm what your website claims, you are far more likely to be cited. This means that an active presence on platforms like Reddit (not promotional posts, but genuine, value-adding comments on existing discussions) can significantly boost your AI visibility.

Stage 4: Measure Attribution in the Zero-Click World

The final stage closes the loop. Once you have implemented recommendations, whether that is creating new content hubs, engaging on Reddit, or restructuring pages for better AI readability, you track the impact by monitoring changes in your mentions, citations, and AI bot traffic patterns over time.
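Tracking that impact reduces to comparing audit snapshots over time. A minimal sketch, assuming each snapshot is a dict of the three signals the article tracks:

```python
# Sketch: period-over-period change in mentions, citations, and AI bot
# traffic between two audit snapshots. Snapshot structure is assumed.

def attribution_delta(before, after):
    """Percentage change per metric between two snapshots."""
    return {
        metric: round(100 * (after[metric] - before[metric]) / before[metric], 1)
        for metric in before
    }
```

If mentions go from 40 to 50 after a content-hub launch, the report shows a 25% lift on that metric, which is the zero-click equivalent of a rankings gain.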

This is the new attribution model. You audit, you ground the audit in real traffic data, you implement, and you measure the change. It is not as clean as a click-through report, but it is honest and actionable. And it is the closest thing we currently have to a reliable feedback loop in the AI-driven search landscape.

What This Means for Your Marketing Strategy

If you take one thing from this article, let it be this: AI visibility is not just a feature to add to your existing SEO workflow. It is a parallel discipline that requires its own measurement, its own optimisation, and its own attribution model.

The organisations that will thrive in the next two years are the ones that stop treating AI search as an afterthought and start treating it as a primary channel. That means investing in real data collection (not just predictions), understanding the corroboration economy (Reddit, forums, independent reviews matter more than ever), and building a measurement framework that does not depend on clicks.

The good news is that this is still early. Most organisations have not even started. Which means the window of opportunity to establish your brand’s AI presence is wide open, but it will not stay that way for long.

About the Author

Vincent Sider is an AI consultant, trainer, and entrepreneur. He is the founder of GeoMaestros, a platform that measures and optimises AI visibility across ChatGPT, Google AI Mode, Perplexity, Claude, and Copilot. He has trained over 1,000 professionals on AI topics and works with organisations across tourism, finance, and professional services.

To audit your own AI visibility, visit geomaestros.com.
