
HBR article: Beware hyper-personalization

This is why I'm pounding the table on being very, very cautious about all of the spin that will come from "big data." Big data is the ultimate toy for mega-geeks, and marketing loves to spin the hype that circles around it. From a leadership POV, you can't let the promise of big data overshoot your customers' actual needs.

Great point, Joe. If it's a great tool that helps your users, then implement it. But you'd better be sure it's reliable and something that actually gets used. Don't mess up the basics :D
 

Yup. Build what your customer wants... sounds so simple, yet it's so easy to miss the target.

(Note: everyone makes decisions with the best intent; in the case of Netflix, they overshot the target.)
 
Businessweek article

From the article "Leveraging Stibo Systems' MDM platform STEP, Cars.com will manage more than 100,000 individual product references and related items in an effort to create a more satisfying shopping experience for consumers and provide a greater return for their advertisers and dealer customers."

This will really help them with their needle in a haystack problem.
 
As with most website enhancements, implementing personalization should be an iterative process rather than a set-it-and-forget-it exercise. The feature should be tracked with your analytics tool of choice, and different variations of it can be A/B tested to see which gives the best results.
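To make that concrete, here's a rough sketch of how a site could split visitors between two variations deterministically, so the same shopper always sees the same version. It's Python purely for illustration; the visitor ID and the idea of logging the assignment to your analytics tool are placeholders, not any particular vendor's API.

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str, variations=("A", "B")) -> str:
    """Deterministically bucket a visitor into a variation.

    Hashing the visitor ID together with the experiment name means the
    same visitor always lands in the same bucket, and different
    experiments are bucketed independently of each other.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# Illustrative usage -- visitor_id would come from your analytics cookie,
# and you'd log both the assignment and any conversion event so the
# test can be evaluated later.
print(assign_variation("abc123", "vdp_personalization"))  # "A" or "B"
```

However you wire it up, the key is that every visitor gets exactly one variation and every conversion is recorded against it; otherwise the results can't be compared.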

I heard an interesting point in a meeting last week concerning A/B testing on car dealer websites. The speaker's point was that no individual vehicle detail page (VDP) gets enough visitors, nor do most car dealer websites in general get enough total visitors, to do A/B testing in a mathematically sound way. The traffic volume just isn't there, and the inventory pages change constantly. For example, Zappos can present two different pages for the same pair of shoes to see which one converts better; no car dealer VDP gets the traffic a shoe product page on Zappos does. Possibly some website vendors could respond to this, since you see a much larger data set than any individual dealership does for its website. Thoughts?
 

Great topic, Bill. Yes, low volume in testing will cause all kinds of noise in the results. So the lower the volume, the longer you need to leave the split test up and running to gather enough traffic to meet the minimum.

I'm not exactly sure what that minimum is. I've forwarded your thoughts to our data scientists for some clarity on this.

More as it comes...
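In the meantime, a back-of-the-envelope way to see why lower volume means a longer test is the textbook sample-size formula for comparing two proportions. To be clear, this is just the standard formula, not whatever method our data scientists land on, and the numbers below are made-up examples.

```python
import math

def visitors_per_variation(baseline_rate, lift, z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variation to detect a given lift.

    Defaults correspond to 95% confidence (z_alpha) and 80% power (z_beta).
    baseline_rate: current conversion rate, e.g. 0.02 for 2%
    lift: absolute improvement to detect, e.g. 0.01 for one percentage point
    """
    p1, p2 = baseline_rate, baseline_rate + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2
    return math.ceil(n)

# Example: a page converting at 2%, trying to detect a jump to 3%.
# Comes out to roughly 3,800 visitors per variation, which is why a
# low-traffic site has to leave the split test running for a long time.
print(visitors_per_variation(0.02, 0.01))
```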
 
Life is cool when you have a small army of data scientists to kick ideas around with. I presented this question:

Subject: "…do most car dealer websites in general get enough total visitors to mathematically do A/B testing"

My questions:

  • 1) Is there a sharp threshold where the data goes from fair quality to high quality, or is there a wide grey area where data quality improves as volume increases?
  • 2) In the case of low-volume sites, the test would have to run longer to collect enough data. Is there any unforeseen risk to having a test run for a very long period?

Some early responses:

"..When an A/B or multivariate test runs, it’s all about reaching statistical significance. With that, you get to set a confidence threshold that’ll set some more realistic expectations around causation.


There are a number of tools online like: A/B testing statistical significance calculator - Visual Website Optimizer - Visual Website Optimizer"


And...

"Question 1:
...as long as the gap between conversion rates is big enough, a test can work with low traffic. Let's say we test a dealer with only 100 visitors to a page. We split the traffic and get 50 visitors for each of the two variations. Variation A gets 1 conversion and Variation B gets 7 conversions, for 2% and 14% conversion rates respectively. This result is statistically significant because the gap between the two conversion rates is large. If Variation B had received only 2 conversions, the test would need to run for a longer period.


This calculator gives more detail; feel free to play with it: Split Test (A/B Test) Calculator


Question 2:
I see no risk in running tests for a long period. If anything, the longer the better. Because traffic cycles on a weekly basis and automotive traffic cycles on larger time spans, running a test through these cycles gives me more confidence in the results."
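To sanity-check numbers like that 2% vs. 14% example without an online calculator, a standard two-proportion z-test works. That's my choice of test for illustration; the calculators linked above may use a slightly different method under the hood.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# The example above: 1 of 50 visitors (2%) vs. 7 of 50 visitors (14%)
z, p = two_proportion_z_test(1, 50, 7, 50)
print(round(z, 2), round(p, 3))  # about z = 2.21, p = 0.027 -- under the usual 0.05 cutoff
```

Drop Variation B to 2 conversions and the p-value jumps well above 0.05, which matches the point about needing to keep the test running longer.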



HTH
Joe