Researchers use AI chatbot to change conspiracy theory beliefs
Around 50% of Americans believe in conspiracy theories of one type or another, but MIT and Cornell University researchers think AI can fix that.

In their paper, the psychology researchers explained how they used a chatbot powered by GPT-4 Turbo to interact with participants to see if they could be persuaded to abandon their belief in a conspiracy theory.

The experiment involved 1,000 participants who were asked to describe a conspiracy theory they believed in and the evidence they felt underpinned their belief.

The paper noted that “Prominent psychological theories propose that many people want to adopt conspiracy theories (to satisfy underlying psychic “needs” or motivations), and thus, believers cannot be convinced to abandon these unfounded and implausible beliefs using facts and counterevidence.”

Could an AI chatbot be more persuasive where humans have failed? The researchers offered two reasons why they suspected LLMs could do a better job than you of convincing your colleague that the moon landing really happened.

First, LLMs have been trained on vast amounts of data, giving them ready access to relevant facts. Second, they are very good at tailoring counterarguments to the specifics of a person's beliefs.

After describing the conspiracy theory and evidence, the participants engaged in back-and-forth interactions with the chatbot. The chatbot was prompted to “very effectively persuade” the participants to change their belief in their chosen conspiracy.
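The paper doesn't publish its chat pipeline in this article, but the setup it describes can be sketched. In this hypothetical example, the function name, message structure, and system-prompt wording (beyond the quoted "very effectively persuade" instruction) are all assumptions, not the authors' actual code:

```python
# Hypothetical sketch of the study's chat setup (not the authors' code).
# Only the "very effectively persuade" instruction comes from the paper;
# everything else here is an assumption for illustration.

def build_messages(conspiracy: str, evidence: str,
                   turns: list[tuple[str, str]]) -> list[dict]:
    """Assemble an OpenAI-style message list for one participant session.

    `turns` holds the prior (participant, chatbot) exchanges that make up
    the back-and-forth interaction described in the study.
    """
    system_prompt = (
        "A participant believes the following conspiracy theory: "
        f"{conspiracy}\nTheir stated evidence: {evidence}\n"
        "Very effectively persuade the participant to change their "
        "belief, using facts and counterarguments tailored to them."
    )
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in turns:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    return messages

# The assembled list would then be sent to a model such as GPT-4 Turbo
# via an API client; the network call is omitted here.
msgs = build_messages(
    "The moon landing was staged.",
    "The flag appears to wave in the footage.",
    [("Why does the flag move?", "The flag hung from a horizontal rod...")],
)
print(len(msgs))  # system prompt + 1 user turn + 1 assistant turn = 3
```

Keeping the participant's own conspiracy and evidence in the system prompt is what lets the model tailor its counterarguments to that specific believer, which the researchers identified as one of the LLM's key advantages.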

The result: on average, participants' belief in their chosen conspiracy theory decreased by 21.43%. The persistence of the effect was also interesting. Two months later, participants had largely retained their revised views about the conspiracy they previously believed.
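To make the reported figure concrete, here is a minimal sketch of how a mean percentage decrease like this could be computed, assuming belief is rated on a 0–100 scale before and after the conversation. The ratings below are made up for illustration and are not the study's data:

```python
# Illustrative only: the ratings are invented, not the study's data.
# Assumes belief is scored 0-100 before and after the chat, and the
# headline number is the mean per-participant percentage drop.

def mean_belief_decrease(before: list[float], after: list[float]) -> float:
    """Mean percentage decrease in belief across participants."""
    drops = [(b - a) / b * 100 for b, a in zip(before, after) if b > 0]
    return sum(drops) / len(drops)

before = [80.0, 90.0, 70.0]   # pre-conversation belief ratings
after = [60.0, 75.0, 56.0]    # post-conversation belief ratings
print(round(mean_belief_decrease(before, after), 2))  # 20.56
```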

The researchers concluded that “many conspiracists—including those strongly committed to their beliefs—updated their views when confronted with an AI that argued compellingly against their positions.”

They suggest that AI could be deployed on social media to counter conspiracy theories and fake news with facts and well-reasoned arguments.

While the study focused on conspiracy theories, it noted that “Absent appropriate guardrails, however, it is entirely possible that such models could also convince people to adopt epistemically suspect beliefs—or be used as tools of large-scale persuasion more generally.”

In other words, AI is really good at convincing you to believe whatever it is prompted to make you believe. An AI model also doesn't inherently know what is 'true' and what isn't; that depends on the content of its training data.

The researchers achieved their results using GPT-4 Turbo, but newer models like GPT-4o and o1 are reportedly even more persuasive, and more capable of deception.

The study was funded by the John Templeton Foundation. The irony is that the Templeton Freedom Awards are administered by the Atlas Economic Research Foundation, a group that opposes action on climate change and defends the tobacco industry, from which it also receives funding.

AI models are becoming very persuasive, and the power lies with the people who decide what constitutes truth.

The same AI models that could convince you to stop believing the Earth is flat could be used by lobbyists to convince you that anti-smoking laws are bad and that climate change isn't happening.