
"Useless and yet indispensable" // Local Marketing Insider #037

jakehughes

Feb 17, 2021

A little exercise: Say you went to Amazon to find a product, but this time all of the review content was gone. You would feel the vacuum. The isolation. How are you supposed to know if humidifier A is better than humidifier B?

Reviews are foundational to providing consumer perspectives online. And yet, because anyone can write one, they’re not without flaws.

I was listening to the After Hours podcast by HBR and came across an early episode on reviews.

Harvard Business School professors Youngme Moon, Mihir Desai, and Felix Oberholzer-Gee try to “make sense of online reviews” and land on the phrase that titles this article: useless and yet indispensable.

What caught my attention was their discussion on how reviews can simultaneously be hugely valuable and utterly frustrating. Their judgment at the time (2018) was that reviews are handled by the business world in a clumsy manner.

Equal parts vague for the reader of the review and lacking constructive feedback for the business.

For LMI #037, I’m going to go through some fundamental biases in online reviews so that the LMI community can use these realities to make better strategic decisions.

An Uphill Battle

There is limited context

Each reviewer comes to a review with a different set of life experiences.

Different interests, values, skillsets.

A reviewer’s perspective colors their reviews in ways that are not easily identifiable to the reader.

Worth considering: Responding to reviews can help add context and lead to 12% more reviews.

Quality vs. price

Understanding how a reviewer trades off quality and price is even more challenging.

Take the case of a hotel. You can have a great experience at an objectively lower-quality hotel because the lower price associated with that experience warrants lower expectations from the start.

At the same time, you can have a bad experience at an objectively high-quality hotel, possibly taking issue with something that you would let slide at a more affordable price point.

Hypothetical: Is it possible to isolate the influence of price in reviews?
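One way to picture it: a toy model where guests anchor expectations to the nightly price, so a decent budget hotel can out-review an objectively nicer one. The formula and numbers below are invented purely for illustration, not a claim about how any review site works.

```python
# Toy model of price-anchored expectations (formula and numbers are invented for illustration).
def perceived_rating(objective_quality, nightly_price):
    expectation = 2.0 + nightly_price / 100   # pricier stays are held to a higher bar
    gap = objective_quality - expectation     # positive = pleasantly surprised
    return max(1, min(5, round(3 + gap)))     # clamp to a 1-5 star review

print(perceived_rating(objective_quality=3.5, nightly_price=90))   # decent budget hotel -> 4 stars
print(perceived_rating(objective_quality=4.5, nightly_price=280))  # nicer hotel, higher bar -> 3 stars
```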

How we rate differs by region

Cultural norms influence ratings.

There is good research on how net promoter scores (NPS) differ by country.

In Europe, an 8 out of 10 on the NPS question is considered good, a 9 is great, and a 10 is impeccable. It’s seen as almost impossible to earn a 10.

In Japan, it’s considered poor etiquette to rate any business too high or too low, regardless of performance. Japan has the lowest median NPS rating, a 6.

In the US, the cultural standard is to give a very high rating when the experience is positive and to drop it only for exceptionally poor experiences. The US median NPS rating is a 9.

Internationally, the US consistently has the highest average ratings.
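For reference on the mechanics, here’s a minimal sketch of the standard NPS calculation, where 9-10 counts as a promoter, 7-8 as a passive, and 0-6 as a detractor. The two rating samples are made up to show how similar satisfaction can produce very different scores under different cultural norms.

```python
# Minimal sketch of the standard NPS calculation (ratings below are invented for illustration).
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)   # 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # 0-6; 7-8 are passives and count as zero
    return round(100 * (promoters - detractors) / len(ratings))

# Two hypothetical customer bases with similar satisfaction but different rating norms.
europe_style = [8, 8, 9, 8, 7, 8, 8, 9, 8, 6]   # "an 8 is a good score"
us_style = [9, 10, 9, 10, 9, 9, 10, 9, 8, 6]    # "give a 9 or 10 unless something went wrong"

print(nps(europe_style))  # 10 -> mostly passives, so the score stays low
print(nps(us_style))      # 70 -> same broad sentiment, much higher score
```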

Rating inflation leads to lost meaning

Uber is a great example here.

Anybody who uses the app knows that unless you have a truly problematic experience you’re expected to give a 5-star rating.

Because of this, the value of the rating is wildly inflated and, as a result, largely without meaning.

A 4.9-5.0 rating is needed just to meet expectations; anything else is below the norm. There is little room for constructive, non-punitive feedback.

Extreme cases are overly represented

The podcast hosts touch upon a historically common issue with reviews - the distribution of opinion is highly polarized, with many extreme positive and extreme negative views, and few moderate ones.

This phenomenon is well-documented in academic studies, visualized by a “J-shaped” review distribution graph.

[Image: J-shaped review distribution]

This outcome is driven by motivation. Those with extreme experiences will be more likely to share.

For a business, fighting this tendency will make your review content more representative of what is actually going on and, as a result, more helpful for prospects.

Asking all of your customers for a review and making the process as easy as possible will reduce the likelihood of a polarized review distribution.
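To make the self-selection point concrete, here’s a toy simulation; the satisfaction mix and response rates are assumptions for illustration, not figures from the research above. When only the most motivated customers post, the extremes are over-represented; when everyone is asked at the same rate, the reviews track the underlying distribution.

```python
# Toy simulation of review self-selection (all probabilities are illustrative assumptions).
import random
from collections import Counter

random.seed(7)

# Assume underlying customer satisfaction is mostly moderate (rough bell curve on 1-5 stars).
true_ratings = random.choices([1, 2, 3, 4, 5], weights=[5, 10, 35, 35, 15], k=10_000)

# Assume customers with extreme experiences are far more motivated to post.
post_probability = {1: 0.45, 2: 0.08, 3: 0.04, 4: 0.06, 5: 0.40}

self_selected = [r for r in true_ratings if random.random() < post_probability[r]]
asked_everyone = [r for r in true_ratings if random.random() < 0.30]  # e.g. a post-sale email to all

print("self-selected:", Counter(self_selected))    # extremes over-represented -> the J shape
print("asked everyone:", Counter(asked_everyone))  # tracks the true, more moderate distribution
```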

In a related study of reviews on the company review platform Glassdoor, researchers found that pro-social incentives led to a less biased review distribution:

"Our results show that people are more likely to leave online reviews when they’re reminded that doing so helps other job seekers. Simple, pro-social incentives also led the distribution of reviews to be less biased, creating a more normal bell-curve distribution of reviews."

To the reader, we say, “best of luck”

Combining all of the above, with limited input from review sites or businesses, the reader of your reviews is left on their own to account for these biases or, more likely, to proceed unaware.

It’s not on you

A review system that limits these issues will most likely come from the review platforms, not from local businesses.

But here are the steps businesses can take to improve their review content:
  • Asking every customer for a review and making the process simple will improve the representation of the satisfied, moderate customer.
  • When asking for a review, test “pro-social incentive” language to reinforce for the reviewer the impact their review could have on their peers.
  • Responding to reviews leads to 12% more reviews because customers know the business will read their feedback.
  • Grading your employees on a scale where anything less than a perfect score is a failure will create an Uber-like situation, inflating ratings to the point where they lose relevance to the reader. Employees will ask customers for the required perfect score, and some potentially useful feedback will be lost in service of social norms.
  • Video reviews can help add context for the prospect without adding significant complexity to the request process.

Bonus: happy or not?

At scale, even the most simplistic feedback can have value.

The company HappyOrNot partners with airports across Europe, placing a simple smiley face, neutral face, or frowny face feedback system in security lines.

The result is a real-time feedback system. Lines are ranked and the current performance is displayed for each security team to see.

The ranking system motivated poorly rated security line staff to immediately respond with better service.

I hope you had a good end to Q2 and have a fun summer ahead. Happy belated 4th.

Last week Matt posted this fantastically charming video testimonial, which Robert and Kipling submitted via Invite Video. Certainly worth the 35s watch.

To be notified via LinkedIn of new Insider articles every 2 weeks, subscribe at the top of this article.

See you in 2 weeks - Jake, Marketing @Widewail


Study shows +28% conversion when using user-generated content in ads

The barriers are coming down between the brick-and-mortar marketer and UGC. You're not going to want to miss this one.

