
AI SEO or GEO building ideas

Great thread, I'm playing catchup here. Gregg's buzzword takedown should be printed on a plaque and hung in every vendor office (my company included).

The ground floor comments are spot on. We're in here debating AI-ready VDPs and RAG pipelines while most dealers can't tell you if their NAP is consistent across 50 directories. The majority of the time I talk to dealers about SEO, their GBP is not optimized, just claimed and abandoned like a New Year's gym membership. Local citations a mess. Core Web Vitals failing. Canonicals broken. Most aren't ready for "Make us show up in LLMs" and most don't want to stretch before a workout.
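Since NAP drift is the named failure mode, here's a minimal sketch of how a citation audit can be automated. The listings, dealer name, and normalization rules are hypothetical; a real audit would pull live records from the actual directories:

```python
import re

def normalize_nap(name: str, address: str, phone: str) -> tuple:
    """Normalize a Name/Address/Phone record so listings can be compared."""
    digits = re.sub(r"\D", "", phone)[-10:]  # keep last 10 digits of the phone
    addr = re.sub(r"\s+", " ", address.lower())
    addr = addr.replace("street", "st").replace("avenue", "ave").replace("road", "rd")
    return (name.lower().strip(), addr.strip(), digits)

def audit_citations(listings: list) -> list:
    """Return the distinct normalized NAP variants found across directories."""
    return sorted({normalize_nap(l["name"], l["address"], l["phone"]) for l in listings})

# Hypothetical records scraped from three different directories.
listings = [
    {"name": "Springfield Motors", "address": "123 Main Street", "phone": "(555) 123-4567"},
    {"name": "Springfield Motors ", "address": "123 Main St", "phone": "555-123-4567"},
    {"name": "Springfield Motors", "address": "123 Main Ave", "phone": "5551234567"},
]
variants = audit_citations(listings)
# more than one variant means the citations disagree somewhere
print(len(variants))
```

The first two listings normalize to the same record; the third disagrees on the street, so the audit reports two variants and flags the drift.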

Once the basics are actually handled, content is where leverage lives. We use Hrizn with our clients. Best content OS in the space and I'm not being polite or pitchy. Their Dealer DNA components are the real deal. They're not generating another soulless Silverado blog post that reads like it was written by a chatbot with a thesaurus. They're building content tied to YOUR store, YOUR market, YOUR identity. With the March spam update live, every dealer running copy-paste content across 12 rooftops has gotta be rethinkin' their content strategy.

Where I'd really push this thread selfishly... the biggest problem in automotive isn't content or infrastructure. It's that every dealership's context lives in the GM's gut and gets distributed via vibes and monthly vendor phone calls (if they make them). 25+ SaaS platforms all operating in silos. Website vendor gets one story. Ad agency gets another. BDC is out there freelancing. We've been running on gut-distributed context for 30 years and calling it a digital strategy.

What could soon replace gut is a context or intelligence syndication layer. Capture what makes the dealership unique, update in real time, and push it everywhere. The dealers who win next won't be the ones with the most tools. They'll be the ones whose intelligence is actually organized and accessible instead of trapped in one person's head or gut. There are tons of providers building this layer into THEIR applications. To me, that's the same walled garden problem we've struggled with in this industry for the last 25 years. Every vendor wants to be the brain. Nobody wants to be the nervous system. Content, ads, and comms could all be leveraged and operate harmoniously.

What's missing is a single context engine that syndicates intelligence out to every provider AND pulls the invisible stuff back in. Training data. Cross-platform comms. API usage and cost. Inventory. The operational signal that's actually driving output and nobody's monitoring because it's buried across 30 logins nobody opens. Who knows. This is moving so fast it's tough to keep up.
 
One thing I've done to adapt to 2026 SEO and LLMs is to consider each page (other than the SRP) as a Landing Page. Google's AI overview, in particular, right now just wants answers and so traffic may not necessarily come through the front door like it has in the past.

Knowing that, we need to move away from basic 'templated' secondary pages that focus only on their own topic. Instead I'm including inventory on the "We Buy Cars" page because I don't know how shoppers got there, and maybe they haven't seen the dealer's inventory yet. I see too many dealership pages where the finance page is just the dang credit app with NO TRUST factors shown. Or About Us is a boring generic paragraph with fill-in-the-blanks for that one dealership. That's not good enough (never has been, really).

This goes for VDPs as well. Someone mentioned that all car sites have the same exact data on them, and that's true. So we are including additional sections like "Why Buy From Us" content, the Google map pack and dealership info, even customer testimonials, etc. A car buyer has more ways than ever to see your cars without visiting your website... so when they finally do land there, the content on the page should sell them on the WHY US instead of assuming the trust is already there.

Lastly, building out more filtered inventory pages like "[body style] for sale in [city]" with relevant, relatable, local content and some FAQs instead of just a filtered list of vehicles. LLMs need REASONS to choose you...

 
@Alex Snyder appreciate that.

Here's the plain-language version for any dealer reading this thread: The core problem: Your DMS has all the data. Your website has all the inventory. But when a shopper asks ChatGPT or Perplexity "find me a family SUV under $35k near me," most dealers simply don't appear — not because the inventory isn't there, but because the pipeline between the data and the AI is broken.

Three reasons that pipeline breaks:

Infrastructure — most dealer sites are too slow, too JavaScript-heavy, or actively blocking the new generation of AI crawlers. The AI never even sees the inventory.

Comprehension — even when the AI does crawl the page, it reads like a spec sheet built for a 2015 Google bot. The AI can't extract a confident answer from it, so it doesn't recommend it.

Trust — AI models cross-reference everything. A dealer with inconsistent information across their website, Google profile, and review sites gets filtered out before a recommendation fires.
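The infrastructure point is easy to self-check: plenty of sites block AI crawlers in robots.txt without realizing it. A minimal sketch using Python's standard `urllib.robotparser`; the example robots.txt, domain, and bot list are illustrative (check your own server logs for the crawlers that actually matter to you):

```python
from urllib.robotparser import RobotFileParser

# AI crawlers commonly seen in the wild; an assumption, not an exhaustive list.
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

# Example robots.txt that (perhaps unintentionally) blocks one AI crawler.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Which AI bots are barred from the inventory pages?
blocked = [bot for bot in AI_BOTS
           if not rp.can_fetch(bot, "https://example-dealer.com/inventory/")]
print(blocked)
```

Here only GPTBot is blocked; the other three fall through to the `*` rule and can still reach the inventory. Run the same check against your real robots.txt before spending anything on content.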

What we've been mapping in this thread is essentially a three-layer spec for what an AI-ready dealership looks like in 2026 — and a controlled test to prove which layers actually move the needle.

The dealers who get this right in the next 12 months will have a significant first-mover advantage. The ones waiting for their website provider to solve it for them will be waiting a long time.
 

1. Machine Trust = Agree

  • Fast site
  • Clean HTML (no JS walls)
  • Proper schema
  • Real availability
You're correct: if that fails, nothing else matters!
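The "proper schema" bullet can be made concrete with a JSON-LD sketch for a VDP, built here in Python. The types (`Car`, `Offer`, `AutoDealer`) are standard schema.org vocabulary; every value (VIN, price, dealer name) is a placeholder to be mapped from your DMS or inventory feed:

```python
import json

# All values below are placeholders; map them from your DMS/inventory feed.
vdp_schema = {
    "@context": "https://schema.org",
    "@type": "Car",
    "name": "2023 Chevrolet Silverado 1500 LT",
    "vehicleIdentificationNumber": "1GCUYDED0PZ000000",  # hypothetical VIN
    "mileageFromOdometer": {"@type": "QuantitativeValue", "value": 18500, "unitCode": "SMI"},
    "offers": {
        "@type": "Offer",
        "price": "42995",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",  # the "real availability" signal
        "seller": {"@type": "AutoDealer", "name": "Example Motors", "telephone": "+1-555-123-4567"},
    },
}

# Emit as the body of a <script type="application/ld+json"> tag on the VDP.
payload = json.dumps(vdp_schema, indent=2)
print(payload[:40])
```

The point is that availability and price come from live data, not a template, so the structured data never contradicts the page.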



2. Comprehension = Agree

  • Natural language
  • Real answers to real questions
  • Clear pricing
This is the most underrated layer in the industry right now.



3. Recommendation Confidence = Agree

  • Reviews
  • Pricing clarity
  • Next steps
However, I'd add that AI doesn't just evaluate a single page on its own; it looks at consistency across pages, site-wide signals, entity-level trust, and even "Does the dealer look reliable across the web?"

So those things would also need to be created for the test site.



I agree ...



Sadly you are correct, and it is the most important part: it affects lawsuits, fines, conversions, rankings, and everything you do on your site. Since this is the most important part, we will build it to meet and exceed all current specs so it doesn't affect any test.



Should we test to see if it affects:
  • discovery
  • ranking
  • selection across sources
Your addition to layer 3 is the right call and honestly it's the piece most people miss entirely.

To an AI model, a single VDP doesn't exist in isolation. It's evaluating entity-level signals — does this dealer appear consistently across Google Business Profile, third-party review sites, social, and the open web? Do the NAP details match? Is the pricing consistent across surfaces? A perfect VDP on a dealer with weak entity signals still loses to a mediocre VDP on a dealer with strong off-page trust.

So the test site would need to be built with full entity coherence from day one — not just the page, but the entire web presence behind it.

On your three test dimensions — yes to all three, but I'd sequence them:

Discovery first. Does the unit even appear in an AI response at all? This is a binary pass/fail on layer 1 and entity trust. If you can't clear this bar, the other two don't matter.

Selection second. When the dealer does appear, does the AI recommend that specific unit or just the dealer generally? This is where layer 2 — the comprehension layer — does its work.

Ranking third. Across multiple sources surfacing the same inventory type, where does this dealer land? This is the hardest to isolate cleanly because it's influenced by everything simultaneously.

Running them in sequence also gives us a clean diagnostic — if a dealer fails at discovery, we know exactly which layer broke down and why.
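The sequenced diagnostic above can be sketched as a simple gate: stop at the first failing layer so you know exactly where the breakdown is. Function name and messages are illustrative, not from any real tool:

```python
def diagnose(discovery: bool, selection: bool, ranking_ok: bool) -> str:
    """Run the three checks in sequence and report the first layer that fails.

    discovery  -> does the unit appear in an AI response at all? (layer 1 / entity trust)
    selection  -> is the specific unit recommended, not just the dealer? (layer 2)
    ranking_ok -> does the dealer land acceptably across sources? (layer 3)
    """
    if not discovery:
        return "FAIL: discovery (layer 1 / entity trust)"
    if not selection:
        return "FAIL: selection (layer 2 comprehension)"
    if not ranking_ok:
        return "FAIL: ranking (layer 3, influenced by everything at once)"
    return "PASS: all three dimensions"

# A dealer that gets discovered but whose units aren't specifically recommended:
print(diagnose(discovery=True, selection=False, ranking_ok=True))
```

Because the checks short-circuit, a failure always names the earliest broken layer rather than a blur of symptoms.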
 
@Dealerrefresh Fam - FWIW - Google’s March 2026 core update is the kind of rollout that gets people staring at rankings every few hours and trying to “fix” things that are still moving.

That is usually the wrong move.

Google began rolling this core update out on March 27, 2026, said it could take up to two weeks, and as of April 4 it is still underway. Our take at Hrizn is this: treat this as a quality reassessment event, not a panic event.

For dealers, we think a few things matter most:

1. This probably won’t hit every page type the same way.
Some stores are going to feel it on fixed ops pages. Others on model research, comparison content, or blog sections. Some will barely move at first and then shift later. Broad core updates rarely hit every market, query cluster, and template evenly.

2. Thin, templated, interchangeable pages are the obvious pressure point.
If your service pages look like every other store’s, if your model pages are just paraphrased OEM facts, if your blog is mostly generic commodity content, this is the kind of update that can expose that.

3. This is bigger than “SEO content.”
For dealers, this is really about the full content layer Google uses to decide whether your store is the best local answer for the user. That includes service content, model research, VDP ecosystems, blog content, authorship/trust signals, and page experience.

4. Don’t do something stupid mid-rollout.
Do not rewrite everything while it’s still rolling out.
Do not nuke whole site sections impulsively.
Do not assume every decline means “AI content got penalized.”
Do not ignore leads, engagement, and conversions while obsessing over one keyword.

5. The old playbook keeps getting weaker.
“Publish more pages” used to be enough to create opportunity. That is not a safe assumption anymore. The stronger long-term advantage is content that is actually useful, dealership-specific, locally grounded, technically accessible, and tied to real expertise and trust signals.

My read:

This update is another reminder that the market is moving away from content volume and toward content integrity.

Rather than:
  • more pages
  • more keywords
  • more filler
  • more rewritten OEM copy
Dig deeper on:
  • real service depth
  • real shopper help
  • real local context
  • real authorship/review
  • real dealership intelligence

That doesn't mean AI is the problem... Google is fine with quality AI-assisted content. Commodity content is the problem.

Curious what the community is seeing so far:
Are your movements showing up more on service, model research, blogs, or local pages?

Original deep dive article here if anyone wants the full Hrizn breakdown:
March 2026 Google Core Update: Automotive Industry Impact
 
We're in here debating AI-ready VDPs and RAG pipelines while most dealers can't tell you if their NAP is consistent across 50 directories.
You're not wrong; however, most dealers don't have a "marketing problem," they have a systems problem.

NAP inconsistency, broken GBP profiles, Core Web Vitals… that’s not a knowledge gap. That’s a lack of structure and accountability across 10+ disconnected vendors.

Even if you “fix the basics,” they drift again because there’s no central source of truth keeping everything aligned.

What’s interesting about your point on context is it reframes the whole thing:

Before we talk about AI-ready VDPs or RAG, shouldn’t there be a baseline layer that enforces consistency first?
  • One place that defines NAP and pushes it everywhere
  • One place that holds dealership context (inventory strategy, local demand, positioning)
  • One place every vendor pulls from instead of guessing
Otherwise we’re just stacking AI on top of disorder.

Feels like the real opportunity isn’t another tool but something closer to a shared intelligence layer that keeps the fundamentals locked in while everything else builds on top.
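One way to picture that shared layer: a single canonical context record every vendor reads from, rather than each one reconstructing its own version of the dealership. A minimal sketch; the fields, values, and vendor names are hypothetical:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DealerContext:
    """Single source of truth every vendor pulls from instead of guessing."""
    name: str
    address: str
    phone: str
    positioning: str      # e.g. "family-owned, no-haggle pricing"
    inventory_focus: str  # e.g. "used trucks under $40k"

def syndicate(ctx: DealerContext, vendors: list) -> dict:
    """Push one canonical record to every downstream system (stub)."""
    record = asdict(ctx)
    return {vendor: record for vendor in vendors}

ctx = DealerContext(
    name="Example Motors",
    address="123 Main St, Springfield",
    phone="+1-555-123-4567",
    positioning="family-owned, no-haggle pricing",
    inventory_focus="used trucks under $40k",
)
pushed = syndicate(ctx, ["website", "ads", "bdc"])
# every vendor receives the identical record -- no drift by construction
print(len({tuple(sorted(v.items())) for v in pushed.values()}))
```

The frozen dataclass is the design point: nobody edits their own copy, so the website, ads, and BDC can't drift apart the way the thread describes.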
Most aren't ready for "Make us show up in LLMs" and most don't want to stretch before a workout.

It’s not just that dealers don’t want to stretch before the workout… it’s that there’s nothing in place forcing consistency even when they do.

You can fix NAP, clean up GBP, improve Core Web Vitals, but 90 days later it drifts again because every vendor is operating off their own version of reality.

So when someone says “make us show up in LLMs,” they’re really asking for outcomes on top of a system that can’t even maintain its own baseline.

Until there’s something enforcing consistency and feeding clean, structured context into every channel, AI visibility is going to be random at best.

Otherwise it’s just layering intelligence on top of disorder.

Once the basics are actually handled, content is where leverage lives.

Generic AI content is boring!

If every dealer can generate the same “Top 5 trucks for towing” article, there’s no reason for Google or an LLM to surface yours over anyone else’s.

The leverage isn’t in producing more content, it’s in producing context-rich content that can’t be replicated:
  • Inventory-aware (what you actually have on the lot)
  • Market-aware (what people in your area are buying)
  • Dealership-specific (why your store is different)
That’s the kind of content an LLM can trust, retrieve, and recommend because it’s grounded in real data, not templates.

Otherwise it’s just noise competing with noise.

the biggest problem in automotive isn't content or infrastructure.

I’d argue content and infrastructure are real problems… just not the root cause.

Duplicate content, 45-second load times, broken ADA compliance, messy code ... those kill performance and rankings. If the site can’t load or be indexed, nothing else matters.

But most of that isn’t happening because dealers chose to ignore it, it’s happening because developers chose to ignore it.

Different vendors touching the same site, no shared standards, no central source of truth… so you end up with:
  • bloated templates
  • duplicated content across rooftops
  • performance issues no one owns

It's that every dealership's context lives in the GM's gut and gets distributed via vibes and monthly vendor phone calls (if they make them). 25+ SaaS platforms all operating in silos. Website vendor gets one story. Ad agency gets another. BDC is out there freelancing. We've been running on gut-distributed context for 30 years and calling it a digital strategy.

This is probably the most accurate description of how most dealerships actually operate.

“Gut-distributed context” is exactly it, but the real issue is that the context never gets formalized into something systems can use.

It lives in conversations, not in data.

So every vendor ends up reconstructing their own version of the business:
  • Website reflects one positioning
  • Ads target a different audience
  • BDC handles leads with a completely different message
Not because anyone's wrong, but because there's no shared source of truth.

Information needs to be captured and used:
  • updated in real time
  • accessed by every system
  • and enforced across touchpoints
What could soon replace gut is a context or intelligence syndication layer. Capture what makes the dealership unique, update in real time, and push it everywhere. The dealers who win next won't be the ones with the most tools. They'll be the ones whose intelligence is actually organized and accessible instead of trapped in one person's head or gut. There are tons of providers building this layer into THEIR applications. To me, that's the same walled garden problem we've struggled with in this industry for the last 25 years. Every vendor wants to be the brain. Nobody wants to be the nervous system. Content, ads, and comms could all be leveraged and operate harmoniously.
I like where you’re going with this, especially the “nervous system vs brain” point.
  • Capture what makes the dealership unique (inventory strategy, local demand, positioning)
  • Structure it so AI can use it
  • Then syndicate it out to every system ... ads, AI, CRM, etc.
So instead of each platform trying to figure out the dealership, they’re all pulling from the same source.

What's missing is a single context engine that syndicates intelligence out to every provider AND pulls the invisible stuff back in. Training data. Cross-platform comms. API usage and cost. Inventory. The operational signal that's actually driving output and nobody's monitoring because it's buried across 30 logins nobody opens. Who knows. This is moving so fast it's tough to keep up.

This is where it gets interesting.

What you’re describing sounds like a layer that sits above all of it:
  • Pulls in signals from every system (inventory, CRM, ads, comms)
  • Structures that into usable context
  • Pushes it back out so every platform is operating from the same intelligence
So the website, ads, BDC, etc. aren’t trying to figure things out independently, they’re all reading from the same source.
 
One thing I've done to adapt to 2026 SEO and LLMs is to consider each page (other than the SRP) as a Landing Page. Google's AI overview, in particular, right now just wants answers and so traffic may not necessarily come through the front door like it has in the past.

Knowing that, we need to move away from basic 'templated' secondary pages that focus only on their own topic. Instead I'm including inventory on the "We Buy Cars" page because I don't know how shoppers got there, and maybe they haven't seen the dealer's inventory yet. I see too many dealership pages where the finance page is just the dang credit app with NO TRUST factors shown. Or About Us is a boring generic paragraph with fill-in-the-blanks for that one dealership. That's not good enough (never has been, really).

This goes for VDPs as well. Someone mentioned that all car sites have the same exact data on them, and that's true. So we are including additional sections like "Why Buy From Us" content, the Google map pack and dealership info, even customer testimonials, etc. A car buyer has more ways than ever to see your cars without visiting your website... so when they finally do land there, the content on the page should sell them on the WHY US instead of assuming the trust is already there.

I agree, the real problem is every dealership site is identical:
  • Same OEM data feeds
  • Same trim descriptions
  • Same specs
  • Same “About Us” garbage
  • Same “Apply for Financing” forms
The VDP needs to answer “Why YOU?”

The information about the car is already everywhere.

So your VDP should NOT lead with:
  • Engine specs
  • MPG
  • Features
It should lead with:
  • Why this truck is better from your lot
  • What problem it solves for this specific buyer
  • What risk you remove
The VDPs need to be turned into decision engines, persuasion systems.

“Best for: towing / work / family / off-road”
“Who should NOT buy this”
“Local use cases (city hills? highway? work truck?)”
“Ownership insights (maintenance reality, not brochure BS)”

Then you’re doing something LLMs can’t easily replicate.
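Those decision-engine sections can be derived from real unit data instead of written by hand. A minimal rule-based sketch; the field names and thresholds are assumptions for illustration, not a spec:

```python
def decision_sections(vehicle: dict) -> dict:
    """Derive 'Best for' / 'Who should NOT buy' blurbs from real unit data."""
    best, not_for = [], []
    if vehicle.get("tow_capacity_lbs", 0) >= 9000:      # threshold is an assumption
        best.append("heavy towing")
    if vehicle.get("seats", 0) >= 6:
        best.append("family hauling")
    if vehicle.get("mpg_combined", 99) < 18:            # threshold is an assumption
        not_for.append("long-commute fuel economy shoppers")
    return {"best_for": best, "not_for": not_for}

# Hypothetical unit pulled from the inventory feed:
unit = {"tow_capacity_lbs": 9500, "seats": 6, "mpg_combined": 16}
print(decision_sections(unit))
```

Because the blurbs come from the unit's own numbers, two trucks on the same lot get different "Best for" and "NOT for" sections, which is exactly the non-replicable content the post is arguing for.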

Lastly, building out more filtered inventory pages like "[body style] for sale in [city]" with relevant, relatable, local content and some FAQs instead of just a filtered list of vehicles. LLMs need REASONS to choose you...

This is one of the few content areas dealerships still have left to differentiate on.
  • “Best trucks for construction workers in [city]”
  • Best trucks for heavy towing in [city]
  • Best trucks under $20,000 in [city]
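Generating those filtered pages at scale starts with consistent slugs and titles; the local content and FAQs still have to be written per page. A minimal sketch with hypothetical body styles and city:

```python
def inventory_page(body_style: str, city: str) -> dict:
    """Build the slug and title for a '[body style] for sale in [city]' page."""
    def slugify(s: str) -> str:
        return s.lower().replace(" ", "-")
    return {
        "slug": f"/{slugify(body_style)}-for-sale-in-{slugify(city)}",
        "title": f"{body_style} for Sale in {city}",
    }

# Hypothetical body styles and market:
pages = [inventory_page(b, "Springfield") for b in ["Trucks", "SUVs", "Minivans"]]
print(pages[0]["slug"])
```

The scaffolding is the easy part; each page only earns its keep once the filtered list is backed by the local context and FAQs the post describes.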
 
Your addition to layer 3 is the right call and honestly it's the piece most people miss entirely.

To an AI model, a single VDP doesn't exist in isolation. It's evaluating entity-level signals — does this dealer appear consistently across Google Business Profile, third-party review sites, social, and the open web? Do the NAP details match? Is the pricing consistent across surfaces? A perfect VDP on a dealer with weak entity signals still loses to a mediocre VDP on a dealer with strong off-page trust.

So the test site would need to be built with full entity coherence from day one — not just the page, but the entire web presence behind it.

On your three test dimensions — yes to all three, but I'd sequence them:

Discovery first. Does the unit even appear in an AI response at all? This is a binary pass/fail on layer 1 and entity trust. If you can't clear this bar, the other two don't matter.

Selection second. When the dealer does appear, does the AI recommend that specific unit or just the dealer generally? This is where layer 2 — the comprehension layer — does its work.

Ranking third. Across multiple sources surfacing the same inventory type, where does this dealer land? This is the hardest to isolate cleanly because it's influenced by everything simultaneously.

Running them in sequence also gives us a clean diagnostic — if a dealer fails at discovery, we know exactly which layer broke down and why.

I’ve started building out a context engine to normalize NAP, inventory, pricing, and dealer identity, then syndicate that across site, feeds, structured data, and third-party surfaces.

I’m also layering a retrieval system on top of that so both the site and any AI-facing experiences are pulling from the same source.

Using a clean domain as a test environment right now to isolate discovery vs. entity strength.