
Attribution: Do you give the last touchpoint all of the credit for a sale?

Hey @jon.berna, will you explain this phrase to me? Not asking to be a jerk and this isn't a "gotcha" attempt, I'm here to learn.

It seems to me that confidence in the complete model relies entirely on confidence in the accuracy of the estimated transition probabilities. Is that correct? In other words, if the estimate of the transition probability from C1 to C2 is inaccurate, the entire equation is flawed. This would then be exacerbated by the potential error in each estimate in the series, since it compounds the error and variability of the equation.

1. Is that a fair assessment of the analysis above? What am I missing?
2. If so, what steps did you take, tests did you perform, behavior did you observe, etc., to ensure that your estimates were within an acceptable statistical error rate?

Basically, in this model each state's outgoing transition probabilities are random variables that sum to 100%, which is why for 4 states the probabilities add up to 400% in total. For your second question, I think what you are getting at is the debate between Bayesian models and Markov models: whether states/choices/outcomes should be thought of as independent and random (Markov) or whether we should include some sort of historical information to define our probabilities (Bayesian). That said, I put a huge asterisk on all of this; I did good maths just like I did good englishes.
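
To make the row-sum point concrete, here is a minimal sketch (Python, with made-up states and transition probabilities, not the model discussed in this thread) of a 4-state transition matrix where each state's outgoing probabilities sum to 100%, so the whole matrix "adds up to 400%." It also shows the compounding concern raised above: a relative error in one transition estimate carries straight through to the probability of any path that uses it.

```python
import numpy as np

# Hypothetical 4-state customer journey; numbers are illustrative only.
states = ["C1", "C2", "C3", "Sale"]

# Each row holds one state's outgoing transition probabilities and sums to 1.0,
# so the four rows together "add up to 400%".
P = np.array([
    [0.10, 0.60, 0.20, 0.10],   # from C1
    [0.05, 0.15, 0.50, 0.30],   # from C2
    [0.05, 0.25, 0.30, 0.40],   # from C3
    [0.00, 0.00, 0.00, 1.00],   # Sale is absorbing
])
assert np.allclose(P.sum(axis=1), 1.0)
print("total of all entries:", P.sum())        # 4.0, i.e. 400%

# The probability of the specific path C1 -> C2 -> C3 -> Sale is the product
# of the individual transition probabilities.
path = [0, 1, 2, 3]
p_path = np.prod([P[a, b] for a, b in zip(path, path[1:])])
print("P(C1->C2->C3->Sale) =", round(float(p_path), 4))

# If the C1 -> C2 estimate is off by 10%, the path probability is off by the
# same relative amount: relative errors along a chain multiply together.
P_err = P.copy()
P_err[0, 1] *= 1.10
p_path_err = np.prod([P_err[a, b] for a, b in zip(path, path[1:])])
print("relative error in path probability:", round(float(p_path_err / p_path - 1), 3))
```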
 
Hey @jon.berna, will you explain this phrase to me? Not asking to be a jerk and this isn't a "gotcha" attempt, I'm here to learn.

It seems to me that confidence in the complete model relies entirely on confidence in the accuracy of the estimated transition probabilities. Is that correct? In other words, if the estimate of the transition probability from C1 to C2 is inaccurate, the entire equation is flawed. This would then be exacerbated by the potential error in each estimate in the series, since it compounds the error and variability of the equation.

1. Is that a fair assessment of the analysis above? What am I missing?
2. If so, what steps did you take, tests did you perform, behavior did you observe, etc., to ensure that your estimates were within an acceptable statistical error rate?

The great thing about web data is that there is a ton of it. We also added the CRM (leads) and DMS (sales and ROs) data as nodes. This isn't a static model; rather, it's a trained data set using machine learning (Python to R). The prediction allows you to determine the next-node probability, and that definitely has use cases. There are others as well. For example, we came up with a concept called marketing resiliency. For this, consider the example of a power grid and substations. In order to prevent blackouts, energy companies create redundancies so the electricity can path around a substation that goes out. The same goes for marketing. If you remove a node, more traffic paths through the other nodes. Therefore each remaining node carries more of the weight, creating a higher risk factor. Basically, too many eggs in one basket.
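
To illustrate the resiliency idea, here is a small sketch (Python, with hypothetical traffic shares, not the actual model described above) that removes one marketing node and redistributes its traffic proportionally across the remaining nodes, showing how each survivor's share, and therefore its concentration risk, goes up.

```python
# Hypothetical traffic shares per marketing node (made up for illustration).
traffic = {
    "paid_search": 0.35,
    "third_party_marketplace": 0.30,
    "organic": 0.20,
    "social": 0.15,
}

def remove_node(shares, node):
    """Drop one node and redistribute its share proportionally to the rest,
    mimicking consumers pathing around the missing 'substation'."""
    remaining = {k: v for k, v in shares.items() if k != node}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

after = remove_node(traffic, "third_party_marketplace")
for node, share in after.items():
    print(f"{node}: {traffic[node]:.2f} -> {share:.2f}")
# The largest remaining node now carries 0.50 of traffic instead of 0.35:
# fewer, heavier baskets means higher risk if another node goes down.
```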

In terms of error testing, that is part of the model validation and input-feature tuning. For that process I believe (if I recall correctly) we used an out-of-bag testing procedure.
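
For readers unfamiliar with the term: out-of-bag (OOB) validation scores each tree of a bagged ensemble on the training rows that tree never saw, giving an error estimate without a separate hold-out set. A minimal sketch with scikit-learn and synthetic data follows; it is not the actual pipeline described above, just one common way OOB testing is done.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for journey features -> converted / not converted.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# oob_score=True asks the forest to evaluate each tree on the bootstrap
# samples it was NOT trained on, giving a built-in validation estimate.
model = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
model.fit(X, y)

print("out-of-bag accuracy:", round(model.oob_score_, 3))
```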
 
Did Google Just Kill Independent Attribution?

https://adexchanger.com/analytics/did-google-just-kill-independent-attribution/

Just as dealers will accept them eventually...

Winners And Losers

So, who wins and who loses?

Obviously, the hardest hit are non-Google MTA providers, including solution vendors and consultants.

Not all MTA vendors have a tag they deploy to create their own ID. Those that do are the larger and longer-established platforms such as Nielsen (Visual IQ), Neustar (MarketShare) and Flashtalking (Encore). Others that relied on log files face a difficult path. And still others, such as Oath’s Convertro, just gave up the stand-alone battle.

To survive, independent MTA vendors will have to focus on their omnichannel models, pairing MTA and MMM for a holistic view of online and offline media. They will need wide reach through their own data collection. And they will in effect need a unique user ID that is able to deduplicate across many data sources and devices. This requires a roster of partnerships, including cross-device data vendors, that themselves face challenges.

Also affected are agency analytics teams and MTA consultants. Without log files to analyze, they’re reduced to the less lucrative role of report jockeys. Being smart, some will refine their expertise to focus on ADH, and some will switch to different problems. But some jobs will change.

Beyond MTA, any provider that used DIDs somewhere in their workflow is affected. These include onboarders, marketing analytics and location data providers, and others.

And what about Alphabet/Google? It’s a massive media company that earns $1 in every $3 spent around the world on digital ads. Ad serving is a minuscule part of its business. Losing a few clients, even large ones, won’t have any impact on Alphabet’s life so long as they continue buying paid search and video ads. Which they will.

And what about consumers? Privacy is important and GDPR is real. Many Google partners publicly applaud this move while privately asking questions. For example:

  • Since the DID is anonymous and can’t be used for ad targeting, how does stripping it out protect privacy?
  • How does more accurate campaign measurement hurt consumers?
  • Why does every move Google makes in the name of privacy — such as removing keywords from referring URLs — help its business?
I’m not a lawyer and can’t comment, but I appreciate the questions.

Google is challenging marketers to scramble around it or accept its terms. In the end, most marketers will write down pros/cons, take a deep breath, and say, “OK, Google.”
 
We did a study with Transparency.ai. It's a joke in my opinion. It was an opportunity for AutoTrader to come in and tell us how they generate 90% of our business and income. Whoever is suggesting the study is paying for the study and, in turn, has their own motive and ego to protect.
Knowing what I know now, I would not waste your time.

Until CRMs find a way to import attribution or pathing data for each lead (I don't know of one that does) and produce detailed reports, your CRM will stick to showing you either the first or last point of contact, with every other third party trying to justify its existence.
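
For context on what first-touch versus last-touch (versus fractional) credit means in practice, here is a toy sketch (Python, with a made-up lead path, not tied to any particular CRM) that assigns credit to one lead's touchpoints under the three common rules. A CRM that only stores the first or last contact can only ever produce the first two.

```python
# A made-up touchpoint path for one lead, in chronological order.
path = ["third_party_marketplace", "paid_search", "dealer_website", "phone_call"]

def first_touch(p):
    # All of the credit goes to the earliest touchpoint.
    return {p[0]: 1.0}

def last_touch(p):
    # All of the credit goes to the final touchpoint before the sale.
    return {p[-1]: 1.0}

def linear(p):
    # Spread the credit evenly across every touchpoint on the path.
    return {t: 1.0 / len(p) for t in p}

for name, rule in [("first", first_touch), ("last", last_touch), ("linear", linear)]:
    print(name, rule(path))
```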

I sat through a demo with DealerInspire at one point (though we don't use them), and I think they are getting closer to good path tracking than any other provider I have seen. Though I don't know where they would be able to push that data, as probably no one else supports it yet.

The most valuable pieces of data by far that I have seen are 1. Google Analytics and 2. VistaDash (which does a good job of breaking out some of the attribution data).

From another thread here.
 
@BenHadley and @jon.berna,

Thank you both for the follow-up.

I am more familiar with the Bayesian model than the Markov model, Ben. That clarification helped. Thanks.

"Marketing resiliency" is a great term Jon. Have you trademarked it? It has been years ago now, but when I worked for the purple marketplace, dealers were certain that the audience was shared between purple and orange and that it didn't much matter which you chose because the consumer would visit both. Orange and purple had different variants of the same study citing a 25% crossover. Theoretically, canceling one of those nodes should affect the performance of the other. That makes sense to me. Thanks again for taking the time to answer my questions.
 
Google Hasn’t Killed Attribution Modeling – It Never Really Worked To Begin With
https://adexchanger.com/data-driven...odeling-it-never-really-worked-to-begin-with/

Following Google's announcement that it will no longer share user IDs externally, some industry voices have raised concerns about how this move would harm independent multitouch attribution (MTA).

Indeed, it is hard to deny that any attribution analysis that attempts to determine the efficiency of different ads would lack a big piece of the puzzle without Google's data.

However, how useful was attribution modeling even before this happened?
 

"Click trails, attribution, and all those other fancy marketing trackers are destined to die a hard death at the hands of legislation. And we’ve got a major WTF to Americans on where that legislation is coming from. Trust us when we say it will piss you off. Joe Berna has some CRAZY points and he is no doubt one of the smartest people in the car business. That’s why we call him “DR Truth” because he is the truth of DealerRefresh."

LOVE THIS, REALLY GOOD STUFF and HYPER USEFUL and REALISTIC

Fantastic from the 11:00 to 40:00 mark in reference to tracking, identity resolution, and the social platforms that are blatantly lying about their user numbers, especially Twitter (bots).
  1. Since Americans hate to be tracked, is there a future for most independent attribution tools (legislation)? It sounds bleak to me, especially with the midterms coming shortly and both parties in agreement on it.
  2. Two versions of a product or company to satisfy legislation!? There are fundamental issues there.
  3. Google stopped passing unique identifiers? I had no idea, that's crazy.
  4. Great stuff from Mr. Snyder regarding his "content matters most" and "3P attribution is dead" comments at the ~33:00 mark. Unless groups lie about the totality of their data collection and keep pitching it to dealers? Pointless...
  5. "Too much noise and not enough signals" - :bow: @jon.berna
*I believe IHS had purchased Datium. Ahhh... you guys answered that.
 