The "human-in-the-loop" finding makes sense — the responsiveness gains are real but so is the failure mode when AI goes off-script.
A friend of mine tested this firsthand last month. He called a dealership, an AI agent picked up, and the conversation was going fine. He mentioned he was driving on the highway and wanted to keep it quick. The AI helpfully suggested he pull over at a nearby Walmart to continue the conversation. There's no Walmart anywhere near that stretch of road.
Small moment, but it's exactly the "digital handoff risk" the study flags: the customer doesn't know whether they're talking to a person or a bot, and when the bot hallucinates, it erodes trust in the whole interaction. A human would have just said "no problem, I'll keep it brief."
The 90% response rate improvement is impressive. But response rate and conversion rate are different things — curious if the study tracked what happened after the first response.