New study finds that short dialogues with AI chatbots can reduce people’s belief in conspiracy theories by 20%

A new study finds that AI chatbots holding custom-tailored dialogues significantly reduced conspiracy theorists’ belief in those theories, with the effect still present two months later.

In a new study, researchers from MIT and Cornell have found that a single short conversation with an AI chatbot can reduce a person’s belief in conspiracy theories by around 20% on average, with the effect lasting at least two months.

These findings suggest that personalized, interactive debunking can be a powerful tool in the fight against misinformation.

Challenging Deep-Rooted Beliefs with Personalized Dialogues

The study, conducted by Thomas Costello and David Rand of MIT and Gordon Pennycook of Cornell University, involved 2,190 participants across two experiments.

The study specifically recruited participants who already believed in conspiracy theories, rather than a random sample of the general population.

In the screening phase of the study, participants were asked to describe a conspiracy theory they found compelling and to provide reasons why they believed in it.

To quantify the participants’ belief in conspiracy theories, the researchers used a 100-point scale.

The participants rated their belief in their chosen conspiracy from 0 (“Definitely False”) to 100 (“Definitely True”), with 50 indicating uncertainty.

Only those who provided a genuine conspiracy theory and rated their belief in that theory above the midpoint of the belief scale (i.e., above 50 on the 100-point scale) were included in the study.
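
In code, the inclusion rule amounts to a simple filter. The sketch below is a minimal Python illustration; the record format and field names are hypothetical, not taken from the study’s materials.

    # Hypothetical screening records: each holds the participant's free-text
    # theory description and their 0-100 belief rating.
    participants = [
        {"theory": "The moon landings were staged.", "belief": 90},
        {"theory": "I don't really believe any conspiracy theory.", "belief": 10},
    ]

    # Inclusion rule: keep only respondents who rated their belief above the
    # scale midpoint of 50. (Judging whether the stated theory is a genuine
    # conspiracy theory requires human or model review, not a numeric filter.)
    included = [p for p in participants if p["belief"] > 50]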

Participant Demographics

The participants were recruited through CloudResearch’s Connect platform.

The sample included about 48% men and 52% women, aged 18-80, with an average age of 46 in Study 1 and 42 in Study 2.

Participants’ education levels varied, with the most common categories being “Some College” and “Bachelor’s Degree.”

The study also collected data on participants’ political affiliation, religiosity, and other demographics to examine whether these factors influenced the effectiveness of the AI-led debunking.

The sample was matched to U.S. census demographics for age, gender, race, and ethnicity to ensure representativeness.

The participants all completed the experiment from their own devices, in their chosen environment.

What Do You Believe To Be True?

The participants were asked to describe a conspiracy theory that they believed in, and to provide supporting evidence for that theory.

These conspiracies spanned a wide range of topics, from COVID-19 being a bioweapon to the moon landings being faked.

Other popular theories endorsed by the participants included 9/11 being an inside job, the Illuminati secretly controlling world events, and Princess Diana’s death being orchestrated by the royal family.

Participants then engaged in a three-round conversation with a chatbot using GPT-4, an AI model developed by OpenAI.

The chatbot used the information each participant provided to generate personalized counter-arguments and evidence aimed at debunking that participant’s specific theory.
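
A conversation with this structure is straightforward to sketch with a modern chat API. The snippet below uses the OpenAI Python SDK as an illustration; the system prompt, the placeholder participant statement, and the way that statement is injected are assumptions for demonstration, not the researchers’ actual prompts or code.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical stand-in for the participant's screening-phase statement.
    participant_statement = (
        "I believe the moon landings were faked because the flag appears "
        "to wave and there are no stars in the photos."
    )

    # Seeding the dialogue with the participant's own claims is what makes
    # the counter-arguments personalized. This prompt is illustrative only.
    messages = [
        {"role": "system", "content": (
            "The user believes the conspiracy theory they describe. Respond "
            "with accurate evidence and arguments that address their specific "
            "reasons for believing it, in a respectful tone.")},
        {"role": "user", "content": participant_statement},
    ]

    # Three rounds of AI rebuttal and participant reply, mirroring the
    # three-round design described in the study.
    for round_number in range(3):
        response = client.chat.completions.create(model="gpt-4", messages=messages)
        reply = response.choices[0].message.content
        print(reply)
        messages.append({"role": "assistant", "content": reply})
        if round_number < 2:
            messages.append({"role": "user", "content": input("Your response: ")})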

Lasting Impact Across Demographics

The results were striking: participants who chatted with the AI reduced their belief in the given conspiracy theory by about 20% on average.

This effect held across all age groups, education levels, and political ideologies; demographic factors such as religion and ethnicity likewise had no significant influence on how strongly beliefs changed.

In the first experiment, involving 774 participants, pre-chat belief in the given conspiracy averaged 83.8 points.

After the AI conversations, these beliefs dropped by an average of 16.5 points, a 21.4% decrease.
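
As a side note on the arithmetic: 16.5 points is about 19.7% of the 83.8-point pre-chat mean, so the reported 21.4% figure is most plausibly the average of each participant’s relative change rather than the mean drop divided by the mean baseline; the two computations generally differ. The toy numbers below (hypothetical, not from the study) show the gap.

    # Hypothetical before/after belief ratings for three participants.
    before = [90, 80, 60]
    after = [75, 65, 40]
    drops = [b - a for b, a in zip(before, after)]

    # Ratio of means: total point drop over total baseline (~21.7% here).
    ratio_of_means = sum(drops) / sum(before)

    # Mean of per-participant relative drops (~22.9% here), generally a
    # different number, since large relative drops can come from
    # participants with small baselines.
    mean_of_ratios = sum(d / b for d, b in zip(drops, before)) / len(drops)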

The second experiment, with 1,553 participants, replicated these findings.

Conspiracy belief levels dropped by an average of 12.4 points, or 19.4%.

More than a quarter of the participants became uncertain about their conspiracy belief after the conversations, with their ratings dropping below the scale’s 50-point midpoint.

And the impact of the AI conversations also extended beyond the targeted conspiracy: participants’ belief in other, unrelated conspiracy theories also decreased, suggesting a shift in overall conspiratorial thinking.

The participants also reported increased intentions to unfollow social media accounts promoting conspiracies, and to challenge conspiracy believers in discussions.

AI as a Tool Against Misinformation

“We find robust evidence that the debunking conversation with the AI reduced belief in conspiracy theories by roughly 20%,” the authors write. “This effect did not decay over 2 months time, was consistently observed across a wide range of different conspiracy theories, and occurred even for participants whose conspiracy beliefs were deeply entrenched and of great importance to their identities.”

The study’s findings challenge the notion that conspiracy believers are unwaveringly resistant to counterevidence.

By engaging believers in interactive, personalized dialogue, AI chatbots were able to deliver compelling, targeted debunking that led to meaningful, lasting belief change.

“These findings profoundly challenge the view that evidence and arguments are of little use once someone has gone down the rabbit hole,” the authors conclude. “Instead, our findings are more consistent with an alternative theoretical perspective whereby epistemically suspect beliefs – such as conspiracy theories – primarily arise due to a failure to engage in reasoning, reflection, and careful deliberation.”

The researchers point out that AI chatbots must be developed and deployed responsibly to ensure they are used to combat, rather than spread, misinformation.

They call for further research to optimize AI-based interventions and explore their long-term impacts.

This study highlights how even people who strongly believe in conspiracy theories can change their minds when presented with sufficiently compelling evidence tailored to their specific beliefs.

And AI chatbots offer “a promising new tool,” the authors write, “for delivering such personalized debunking at scale.”

Study Details