Findings

AI gives itself five stars

Machine-written restaurant reviews sound human, even to humans.

Illustration by Alex Eben Meyer

If you use online reviews to help you decide which restaurants, hotels, movies, or gadgets are worth your time and money, perhaps you should think twice about your sources.

Given recent advances in generative AI, some of that human-sounding writing about very human-seeming experiences might be unrelated to anything happening in, well, the real world. And according to a recent study by Yale SOM professor Balázs Kovács, we humans are not very good at figuring out which is which.

Kovács conducted two experiments with 301 randomly selected participants from an online survey site, split between the two studies. The results were published in Marketing Letters.

In the first, he took restaurant reviews from 2019, before the release of sophisticated generative AI, and fed them to GPT-4 Turbo, a recent version of the model behind ChatGPT. He then asked the model to generate reviews of the same restaurants, specifying that they include typos and other human-style quirks.
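Kovács does not reproduce his exact prompts here, but the general setup, feeding a real review to GPT-4 Turbo and asking for a new one with human-style quirks, can be sketched with the OpenAI Python client. The prompt wording, sample review, and parameters below are illustrative assumptions, not the study's materials.

```python
# Illustrative sketch only: the model name, prompt wording, and parameters
# are assumptions for demonstration, not the prompts used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical example of a real 2019-style review used as a seed.
real_review = (
    "Came here on a Friday night. The ramen was rich and the broth had real "
    "depth, though we waited almost 40 minutes for a table."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a real review of a restaurant:\n\n"
                f"{real_review}\n\n"
                "Write a new review of the same restaurant in a similar "
                "casual style. Include a couple of typos and other "
                "human-style quirks, and do not copy sentences from the "
                "original."
            ),
        }
    ],
    temperature=1.0,
)

print(response.choices[0].message.content)
```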

He mixed the one hundred AI-generated reviews with one hundred real reviews, and then asked participants to identify which were written by humans and which by AI. They correctly identified the source only about half the time: no better than chance.
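To see why "about half the time" amounts to guessing, one can compare the observed accuracy to a coin flip with a binomial test. The counts below are hypothetical, chosen only to illustrate the logic; they are not the study's raw data.

```python
# Illustrative check that roughly 50% accuracy is indistinguishable from
# guessing. The counts are made up for this example.
from scipy.stats import binomtest

n_judgments = 2000   # hypothetical total number of human-vs.-AI judgments
n_correct = 1010     # hypothetical number judged correctly (~50.5%)

result = binomtest(n_correct, n_judgments, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_judgments:.3f}, "
      f"p-value vs. chance = {result.pvalue:.3f}")
# A large p-value means we cannot reject the hypothesis that participants
# were simply guessing which reviews were written by humans.
```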

For the second study, Kovács randomly selected ten reviews for each of a hundred restaurants. He directed GPT-4 to create fictional reviews of those restaurants, without reusing any language. Participants then rated both the real and the AI-generated reviews on a five-point scale, from “most likely human” to “most likely AI.” The result: participants thought human-generated reviews were written by humans only 53 percent of the time, but they thought AI-generated reviews were written by humans 64 percent of the time. In other words, at least to these participants, GPT-4 often sounded more human than real human beings.
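One way to read the gap between 53 percent and 64 percent is as a difference between two proportions. The sketch below runs a standard two-proportion z-test on hypothetical counts that merely mirror those percentages; it is not a reanalysis of Kovács's data.

```python
# Illustrative comparison of the two "judged human" rates (53% vs. 64%).
# The underlying counts are assumptions for the example, not the study's data.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

judged_human = np.array([530, 640])   # hypothetical "rated as human" counts
n_reviews = np.array([1000, 1000])    # hypothetical reviews judged per group

z_stat, p_value = proportions_ztest(judged_human, n_reviews)
print(f"human-written judged human: {judged_human[0] / n_reviews[0]:.0%}")
print(f"AI-written judged human:    {judged_human[1] / n_reviews[1]:.0%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value, together with the higher observed rate for AI-written
# reviews, suggests the difference is unlikely to be due to chance alone.
```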

Beyond the issue of restaurant reviews, Kovács notes that “the question of whether we can trust what we read online is becoming more urgent.” He is concerned that the increasing use and sophistication of AI raises major economic and ethical issues, including the erosion of public trust across many areas.
