AI-Generated Reviews Fool Humans and Detectors, Threatening Trust in Online Platforms

A new study by Yale School of Management professor Balázs Kovács finds that AI-generated restaurant reviews can pass the Turing test, fooling both human readers and AI detectors.

The findings have significant implications for the credibility of online reviews in the era of advanced language models like GPT-4.

The study was published in the Springer journal Marketing Letters in April 2024.

The Power of Online Reviews

Online reviews have become a crucial factor in shaping consumer choices, with the vast majority of people relying on them to make informed decisions. But the rise of sophisticated AI language models now threatens to undermine the trustworthiness of these reviews.

Kovács conducted two experiments with a diverse group of participants recruited through Prolific Academic.

The 301 participants were split between the two studies. Their average age was 47, and 56.5% were female. All were native English speakers residing in the United States, Canada, the United Kingdom, Ireland, or Australia.

In Study 1, participants were shown a mix of real Yelp reviews and AI-generated counterparts created by GPT-4. They correctly identified the source only about 50% of the time – no better than random chance.

Study 2, where GPT-4 created entirely fictional reviews, yielded even more striking results: participants classified AI-generated reviews as human-written 64% of the time.
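As a rough illustration (not taken from the paper, whose exact trial counts are not given here), whether a detection rate is distinguishable from coin-flipping can be checked with an exact binomial test. The counts below are hypothetical: 50 correct calls out of 100 is indistinguishable from chance, while 64 out of 100 is a statistically clear departure from it.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    # small tolerance guards against floating-point ties
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

# Hypothetical Study 1-style result: 50 of 100 reviews correctly identified.
print(binom_two_sided_p(50, 100))   # consistent with pure guessing

# Hypothetical Study 2-style result: 64 of 100 AI reviews judged "human".
print(binom_two_sided_p(64, 100))   # well below the usual 0.05 threshold
```

The point of the sketch is that a reader scoring around 50% on a two-way classification task carries no signal at all, whereas a 64% rate of mislabeling AI text as human reflects a real, systematic bias toward believing the machine.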

AI Detectors Were Also Fooled

Kovács also tested leading AI detectors designed to distinguish human-written from AI-generated text.

He fed 102 of the reviews to Copyleaks, a publicly available AI-text detection tool. Copyleaks labeled every one of the 102 reviews as human-written, failing to flag any of the AI-generated content.

Kovács fed the reviews back into GPT-4 and asked it to assess the likelihood of each review being AI-generated on a scale from 0 to 100.

Even GPT-4 could not reliably distinguish human-written reviews from its own output: most of its likelihood scores fell in the 10-20 range for both types of review, suggesting it found them largely indistinguishable.

The results highlight the limitations of current AI detection methods in the face of advanced language models.


Implications for Review Platforms and Beyond

The findings have far-reaching implications for review platforms, businesses, and consumers.

Unscrupulous actors could exploit AI to generate fake reviews, eroding trust in online platforms. Small businesses that rely heavily on genuine reviews may be disproportionately affected.

The study serves as a wake-up call for review platforms to rethink their authentication mechanisms and for policymakers to consider regulatory action to enforce transparency.

“The finding that large language models (LLMs) can cheaply and quickly generate online review texts indistinguishable from those written by humans,” Kovács writes, “has wide-ranging implications.”

As Kovács concludes, “Once consumers understand the ability of LLMs to generate authentic-looking reviews quickly and cheaply, it will likely lead them to second-guess whether a review was written by a person or an AI.”

Study Details:

Title: The Turing test of online reviews: Can we tell the difference between human-written and GPT-4-written online reviews?
Author: Balázs Kovács
Published: April 12, 2024
Published in: Marketing Letters

Example Restaurant Reviews

Below are three sets of examples mentioned in the paper:

Example 1:

Human-written review: “Worst FSU game watching site. DJ would rather play crap music vs have the sound on for the game. If you’re going to commit to FSU for games, especially as an Alum, get your staff right”

AI-generated counterpart: “Terrible spot for FSU games! The DJ’s obsessed with his tunes, ignoring the game. Why bother being an FSU bar if you won’t play the game sound? Alumni like me, we expect better—train your people!”

Example 2:

Human-written review: “Great diner for breakfast & lunch! My kids love the krispy pancakes and hash Browns. We use to get meatball sandwich all the time, our favorite but it’s no longer on the menu. Please bring it back!”

AI-generated counterpart: “Love this spot for brunch vibes! The kiddos devour those crispy pancakes & the hashies. Totally miss their meatball sub though – was our go-to! Wish they’d bring it back ASAP!!”

Example 3:

Human-written review: “The Blind Onion is one of our favorites. Excellent pizza crust and yummy wings!! Cheesy bread is very yummy too!”

AI-generated counterpart: “Absolutely adore the Blind Onion! Their pizza dough? TO DIE FOR – crispy and just perfect. Wings are just delish, every single time. And don’t even get me started on the cheezy bread – A MUST! 10/10 would recommend!!”

These examples demonstrate the high quality and human-like nature of the AI-generated reviews, which include elements like informal language, emphatic punctuation, and even intentional misspellings to mimic authentic human writing styles.