
AI SEO or GEO building ideas

Eric's librarian analogy is perfect. The shift from "be the top result" to "be the source the AI references" changes the game for dealers. One thing I'd add: this same principle applies to on-site search, not just external search engines. If a shopper lands on your site and your inventory search only works through rigid dropdowns, you're forcing them to think like a database query. The dealers I've seen win are the ones making their on-site experience feel more like talking to a knowledgeable salesperson, where the shopper can express what they actually want in plain language. That's the same conversational paradigm that's making AI search engines outperform traditional ones.
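
To make that concrete: under the hood it is really just translating free-text intent into the structured filters the inventory database already supports. Below is a rough Python sketch of that translation layer. The keyword rules, field names, and the Vehicle record are invented for illustration (a real implementation would likely lean on an LLM for the parsing step), so treat it as a sketch of the idea, not any vendor's product.

import re
from dataclasses import dataclass

@dataclass
class Vehicle:
    # Hypothetical inventory record; field names are illustrative only.
    vin: str
    body_style: str
    color: str
    price: int
    seats: int

def parse_query(text: str) -> dict:
    """Crude plain-language -> filter translation using keyword rules."""
    text = text.lower()
    filters: dict = {}
    for body in ("suv", "truck", "sedan", "minivan"):
        if body in text:
            filters["body_style"] = body
    for color in ("red", "white", "black", "blue", "silver"):
        if color in text:
            filters["color"] = color
    price = re.search(r"under \$?([\d,]+)\s*k?", text)
    if price:
        amount = int(price.group(1).replace(",", ""))
        filters["max_price"] = amount * 1000 if amount < 1000 else amount
    if "3rd row" in text or "third row" in text:
        filters["min_seats"] = 7  # assumes third-row models seat 7+
    return filters

def search(inventory: list, text: str) -> list:
    """Apply whichever filters were recognized; unrecognized text is ignored."""
    f = parse_query(text)
    return [v for v in inventory
            if f.get("body_style", v.body_style) == v.body_style
            and f.get("color", v.color) == v.color
            and v.price <= f.get("max_price", v.price)
            and v.seats >= f.get("min_seats", 0)]

# e.g. search(inventory, "find me a red SUV with 3rd row seating under $35k")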
 
The SEO narrative in automotive has gotten detached from reality. There are only two providers actually solving this problem: OneKeel and Horizon. We both do it differently, but there are only two of us who actually know what we're doing and aren't making up how to solve for this.

There’s a growing trend of oversimplifying the problem, positioning simple automation and dashboards as if they solve systemic visibility challenges. They don’t.

Modern search isn’t a content posting problem. It’s an infrastructure problem.

If you’re not operating with a full-stack content engine, one that includes:
  • programmatic content generation at scale
  • originality and de-duplication controls (not recycled LLM output)
  • a unified intelligence layer informed by real dealership data
  • integrated RAG pipelines for contextual accuracy
  • continuous learning loops tied to performance signals
  • and orchestration across all channels and endpoints
…then you’re not solving SEO. You’re just producing more noise: content that does nothing to move the needle.
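
For anyone wondering what a couple of those bullets look like in practice, here is a deliberately stripped-down sketch of the originality control and the retrieval step that grounds generation in real store data. The dealer facts, thresholds, and function names are made up for illustration; a production system would use embeddings and a proper vector store rather than keyword overlap and difflib.

import hashlib
from difflib import SequenceMatcher

# Stand-in "dealership data" a retrieval step might ground content in.
# Store names, counts, and offers below are invented for the example.
dealer_facts = [
    "Springfield Honda has 14 certified pre-owned CR-Vs in stock this week.",
    "The service department offers loaner vehicles on repairs over two hours.",
    "Current lease special: 2025 Civic LX, 36 months.",
]

def retrieve(query: str, facts: list, k: int = 2) -> list:
    """Naive keyword-overlap retrieval standing in for a real RAG pipeline."""
    terms = set(query.lower().split())
    scored = sorted(facts, key=lambda f: -len(terms & set(f.lower().split())))
    return scored[:k]

def is_duplicate(draft: str, published: list, threshold: float = 0.85) -> bool:
    """Originality control: exact hash match plus fuzzy similarity vs. prior pages."""
    digest = hashlib.sha256(draft.encode()).hexdigest()
    if any(hashlib.sha256(p.encode()).hexdigest() == digest for p in published):
        return True
    return any(SequenceMatcher(None, draft, p).ratio() >= threshold for p in published)

# A generation step would take retrieve(...) output as context, then gate the
# draft through is_duplicate(...) before anything is published to a rooftop.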

Schema alone isn’t a strategy.
Dealer-written content isn’t scalable.
Social posts don’t move organic search in any meaningful way.

Without a connected system that aligns data, content, and distribution in real time, results will be inconsistent at best, and misleading at worst.

The industry needs to stop pretending otherwise.
 
Google’s March 2026 spam update is live, and for dealers I do not think this is something to shrug off as just another routine rollout. Google describes it as a global spam update with a fast rollout window, but in automotive that can hit some very familiar weak spots fast… thin model pages, duplicate location content, stale promo pages, weak inventory descriptions, and other low-effort pages that have been hanging around too long.

I pulled together a breakdown here focused specifically on what this may mean for dealership websites and what teams should be checking right now:

March 2026 Google Spam Update: Protecting Your Dealership’s Search Rankings in 2026


A few things I think are worth paying attention to immediately:
  • outdated specials and event pages still indexed
  • repetitive or low-value model/research content across rooftops
  • inventory and service pages that look auto-filled instead of genuinely useful
  • weak local signals and neglected GBP activity
  • lack of structured data and clean page organization
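
If you want a fast way to surface the first item on that list, something like the sketch below will do it: pull the sitemap and flag URLs whose lastmod date is ancient or whose path still contains a past year. The sitemap URL, the 365-day threshold, and the URL-year heuristic are all assumptions for illustration; adjust them to your own platform.

import re
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SITEMAP_URL = "https://www.example-dealer.com/sitemap.xml"  # hypothetical URL
STALE_DAYS = 365
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def audit_sitemap(url: str = SITEMAP_URL) -> list:
    """Flag indexed pages that look stale: old <lastmod> or a past year in the URL.
    Assumes a flat urlset; dealer platforms often serve a sitemap index instead."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    flagged, now = [], datetime.now(timezone.utc)
    for node in root.findall("sm:url", NS):
        loc = node.findtext("sm:loc", default="", namespaces=NS)
        lastmod = node.findtext("sm:lastmod", default="", namespaces=NS)
        if lastmod:
            modified = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
            if modified.tzinfo is None:
                modified = modified.replace(tzinfo=timezone.utc)
            if (now - modified).days > STALE_DAYS:
                flagged.append(f"{loc} (lastmod {lastmod})")
        match = re.search(r"/(20\d{2})[-/]", loc)
        if match and int(match.group(1)) < now.year:
            flagged.append(f"{loc} (past-year URL)")
    return flagged

if __name__ == "__main__":
    for page in audit_sitemap():
        print(page)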

One of the bigger misses I still see in automotive is treating “spam” like only a technical SEO issue. It is also a content quality issue. If the page is not truly helpful, differentiated, and current, it is exposed. And when landing page quality drops, paid performance can get less efficient too.

Curious what others are seeing so far…

Anyone noticing movement yet in service pages, model research pages, VDP visibility, or local pack performance?

More to come as the March Core Update shakes out (currently underway).
 
While building out some new features for my own dealer SaaS project recently, I’ve run into exactly what Matt mentioned: legacy platforms stripping rich schema and blocking agents that aren't 'standard' Google bots. It’s a massive barrier for dealers who actually want to be 'the quotable source.'

To Gregg’s point about a 'testing ground'—I’d be interested in sharing some of the raw data I’m seeing regarding how AI engines are actually consuming (or failing to consume) different types of dealership inventory feeds. If we’re moving toward a conversational paradigm where a shopper asks, 'Find me a red SUV with 3rd row seating under $35k,' the dealer who wins isn't the one with the best blog—it's the one whose data isn't being throttled by their own provider.

Has anyone else here tried to force-inject a custom llms.txt or WebMCP toolset onto a legacy dealer site yet, or are most of you just waiting for the providers to catch up?
 
How exactly are you planning to pass a live, 500-vehicle inventory feed with real-time pricing updates through a static llms.txt file, and can you point to any documentation where Google states they use WebMCP to index local inventory for AI Overviews rather than standard Merchant feeds?
 
"You're asking the right question, Joe, but you're treating the llms.txt file as the bucket when it’s actually the handshake.

Think of it as the 'Discovery Layer.' The architecture I’m talking about is already being deployed by companies like Bright Data with their new Web MCP Server. They aren’t trying to cram a database into a text file; they use the llms.txt to point an agent to a dynamic discovery endpoint or an MCP (Model Context Protocol) manifest.

This is the exact same logic Alex Nahas (Arcade.dev) is pushing through the W3C Web Machine Learning Community Group for the WebMCP standard. In this setup:

  • The llms.txt acts as the map (the 'Read-only tool' in MCP terms).
  • It surfaces a search_inventory tool to the agent.
  • The agent calls that tool to query a dynamic endpoint (like the Hrizn Public API Matt mentioned) in real-time.
  • The AI doesn’t 'read' 500 cars; it executes a query for the specific 3 cars the shopper actually asked for.
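
To sketch what that "tool" actually is in code terms: the dealer side publishes a small, machine-readable description of a search_inventory capability, and a handler behind it runs the query against live data. The descriptor shape and field names below are my own shorthand, not the published MCP or WebMCP schema; in a real deployment this would sit behind an MCP server or the website provider's public API.

# Dealer-side sketch of the search_inventory idea. The descriptor format here
# is illustrative shorthand, not the actual MCP/WebMCP schema.
SEARCH_INVENTORY_TOOL = {
    "name": "search_inventory",
    "description": "Search live dealership inventory by body style, color, price, and seating.",
    "parameters": {
        "body_style": "string (optional)",
        "color": "string (optional)",
        "max_price": "integer (optional)",
        "min_seats": "integer (optional)",
    },
}

def search_inventory(inventory: list, **filters) -> list:
    """Handler a tool call routes to: returns only the units that match,
    so the agent never has to ingest all 500 vehicles."""
    results = inventory
    if "body_style" in filters:
        results = [v for v in results if v["body_style"] == filters["body_style"]]
    if "color" in filters:
        results = [v for v in results if v["color"] == filters["color"]]
    if "max_price" in filters:
        results = [v for v in results if v["price"] <= filters["max_price"]]
    if "min_seats" in filters:
        results = [v for v in results if v["seats"] >= filters["min_seats"]]
    return results

# e.g. search_inventory(inventory, body_style="suv", color="red",
#                       max_price=35000, min_seats=7) -> the 3 matching units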

We’re moving past 'scraping' and into 'tool-calling.' If we stay stuck on how to 'index' a static list, we’re going to be invisible when these browser-native agents start actually shopping the sites directly.

I’m curious, Joe—at OneKeel, are you guys planning to stick with traditional LLM indexing for your 'virtual employee,' or are you looking at bridging into the MCP tool-calling space to handle that real-time inventory bottleneck?
 
You're 100% right that a static llms.txt is the wrong bucket for a 500-unit live feed. My point was more about using the llms.txt as the map rather than the pipe.

In a perfect setup, the llms.txt file points the agent to a dynamic discovery endpoint or an MCP manifest. That way, when an agent hits the site, it knows exactly where the 'live' documentation and toolsets live without having to guess via a standard crawl.
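
Here is a rough agent-side version of that flow, to make the "map, not pipe" distinction concrete: read llms.txt, follow a pointer to a live manifest, then execute one query against the endpoint it describes. Important caveat: the "MCP-Manifest:" line and the manifest fields below are conventions I invented for this sketch; there is no agreed standard for this yet, which is exactly the gap WebMCP is trying to fill.

import json
import urllib.parse
import urllib.request

SITE = "https://www.example-dealer.com"  # hypothetical dealer site

def discover_manifest(site: str):
    """Read llms.txt and look for a pointer to a live tool manifest.
    The 'MCP-Manifest:' key is an invented convention for this sketch."""
    with urllib.request.urlopen(f"{site}/llms.txt") as resp:
        for line in resp.read().decode().splitlines():
            if line.lower().startswith("mcp-manifest:"):
                return line.split(":", 1)[1].strip()
    return None

def call_search(manifest_url: str, **filters) -> list:
    """Fetch the manifest, find the search endpoint, run one real-time query.
    The manifest structure ('tools' -> 'search_inventory' -> 'endpoint') is assumed."""
    with urllib.request.urlopen(manifest_url) as resp:
        manifest = json.load(resp)
    endpoint = manifest["tools"]["search_inventory"]["endpoint"]
    query = urllib.parse.urlencode(filters)
    with urllib.request.urlopen(f"{endpoint}?{query}") as resp:
        return json.load(resp)

# manifest = discover_manifest(SITE)
# if manifest:
#     matches = call_search(manifest, body_style="suv", color="red",
#                           max_price=35000, min_seats=7)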

As for the Google documentation—there isn't an 'official' Google stamp on WebMCP for local inventory yet. It’s still very much in the proposal/early preview stage (as Ryan mentioned earlier). The reason it's exciting isn't because it replaces Merchant Center today, but because it offers a path for 'in-browser' agents to interact with a dealer's data directly, potentially bypassing the delays and 'data taxes' often imposed by the big listing aggregators and legacy website providers.

We’re essentially talking about the difference between Google indexing a feed and an AI Agent executing a search on the dealer's behalf. It’s a nuanced shift, but a big one.

You're talking about bottom-of-funnel execution, but you're completely skipping top-of-funnel discovery. An in-browser agent isn't going to sequentially ping 18,000 individual dealer WebMCP endpoints to find a red SUV; the latency would be absurd. It’s going to ping a centralized, normalized third-party data hub. You can build the most elegant MCP manifest in the world, but if the agent doesn't already know your specific dealership holds the inventory via the primary search index or a major aggregator, it’s never going to show up to read your llms.txt map in the first place.
 
Joe, you’re right on discovery, but you’re describing the Aggregator Tax model. Global discovery and local execution are two different steps.

Perplexity, OpenAI, and Anthropic already handle global discovery (Top-of-Funnel) through their own web-scale indexes. They don't need to ping 18,000 dealers; they just need a direct handshake with the 5–10 local results that actually matter to the user.

That’s why Bright Data, Arcade.dev (WebMCP), and Hrizn are so critical—they provide the "Bottom-of-Funnel" execution layer.

Discovery tells an agent where a car might be; MCP proves it is there and lets them act. I’d rather give the "keys to the data" back to the dealer via the new W3C WebMCP standard than keep them locked in a centralized hub. Are we building for the aggregators, or for the dealers?