
Using Alternative Sources to Decrease Partisan Biases in the Fact-Checking Process

Misinformation – false information presented as fact without the intent to deceive – is a major issue in modern society.

The current media landscape, dominated as it is by social media, allows information of all kinds to spread widely and rapidly. While this can be hugely beneficial when it comes to providing access to information and fostering a diversity of voices and ideas, it also creates a crisis of credibility and trustworthiness. If anyone can say anything, how do you know that what they’re saying is trustworthy or accurate?

Fact-checking is one way that media platforms are trying to combat the spread of misinformation. While it seems like it should be effective to tell readers that a message is false or to correct inaccurate claims directly, studies of fact-checking’s impact suggest that it may not work all that well.

When presented with information that they don’t like or find threatening, people often engage in “motivated reasoning.” That is, they assess the information based on their preexisting beliefs and affiliations rather than on the evidence itself. A fact-checking message suggesting that something a person wants to believe is false is likely to simply be ignored.

Won-ki Moon, an assistant professor of advertising at the University of Florida College of Journalism and Communications, together with Myojung Chung of Northeastern University and S. Mo Jones-Jang of Boston College, conducted an online experiment to see whether changing the source of a political fact-checking message could reduce motivated reasoning.

Participants were political partisans, identifying themselves as either Democrats or Republicans. They were randomized into two groups: one that read a simulated Facebook post that had misinformation about the Democratic Party, and one that read a simulated post with misinformation about the Republican Party. Participants then saw a fact-checking message informing them that the post they had just read had been checked and found to be false. This fact-checking message was presented as coming from one of four different sources: human experts, crowdsourcing, AI, or human experts in collaboration with AI. Participants were asked to rate the credibility of the fact-checking messages, indicating whether they found the message to be accurate, trustworthy, unbiased, and objective.

The study found evidence of motivated reasoning in response to political fact-checking messages: Democratic and Republican participants who saw fact-checking messages correcting negative information about the opposing party rated those messages as less credible.

Changing the source of the message, however, did lessen partisan biases. Fact-checking presented as generated by AI or through crowdsourcing produced significantly less motivated reasoning than fact-checking attributed to human experts. Human-AI collaboration affected participants differently depending on how strongly they believed that AI itself is unbiased, free from errors, and incorruptible. When the messages came from human-AI collaboration, participants with a strong belief in the fairness of AI engaged in much less motivated reasoning than those with moderate or low levels of that belief.

This study provides important insight into motivated reasoning about political messages. When partisan biases were in play, message content took a back seat. Fact-checking from human experts was rated the most credible overall, but that didn’t seem to matter: trustworthy or not, participants were willing to disregard messages that didn’t say what they wanted to hear. Perceived impartiality also wasn’t the same thing as credibility for participants. Crowdsourced fact-checking, for example, didn’t trigger motivated reasoning, but it was also considered the least reliable source.

Overall, the research showed the potential for alternative fact-checking sources to reduce motivated reasoning, giving platforms a useful avenue for combating misinformation. While more work will be needed to find the sweet spot between message source and content that maximizes the impact of fact-checking, technology may prove to be an important ally. That AI-assisted fact-checking reduced motivated reasoning without compromising message credibility shows it to be a promising tool in the fight against misinformation.

The original article, “AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages,” was published online in Digital Journalism on Sept. 14, 2023.

Authors: Myojung Chung, Won-ki Moon, S. Mo Jones-Jang

This summary was written by Vaughan James, Ph.D.

Posted: September 29, 2023