
Most People Don’t Actively Seek to Share Fake News

New research shows that subtly nudging people to think about accuracy increases the quality of the news they share


Fake news and misinformation have been a persistent concern ever since the 2016 U.S. presidential election. Despite increased awareness and (apparent) concern from social media companies, the problem has not dissipated. For example, false content about COVID-19 has proliferated, likely affecting vaccination intentions, and misinformation about the 2020 presidential election almost certainly played a key role in the storming of the U.S. Capitol on January 6.

It’s tempting to conclude that we’re in a “post-truth” world where people either are unable to distinguish fact from fiction or are willfully ignorant and purposefully share falsehoods. This is not an idle curiosity. If true, our democracies are in very big trouble, and perhaps the only option we have is to accept (and even beg for) strict censorship of falsehood by social media companies.

This may come as a surprise, but new work just published in Nature offers a strong challenge to this view.


Our research finds that most people do not wish to share inaccurate information (in fact, more than 80 percent of respondents felt that it’s very important to share only accurate content online) and that, in many cases, people are fairly good at distinguishing legitimate news from false and misleading (hyperpartisan) news. Research we’ve conducted consistently shows that it’s not partisan motivation that leads people to fail to distinguish between true and false news content, but rather plain old lazy thinking. People fall for fake news when they rely on their intuitions and emotions and therefore don’t think carefully about what they are reading. The problem is likely exacerbated on social media, where people scroll quickly, are distracted by a deluge of information, and encounter news mixed in with emotionally engaging baby photos, cat videos and the like.

This means that when thinking about the rise of misinformation online, the issue is not so much a shift in people’s attitudes toward truth as a more subtle shift in their attention to truth. There’s a big disconnect between what people believe and what they share. For example, in one study, some participants were asked whether they would share various headlines, while other participants were asked to judge the headlines’ accuracy. Among the false headlines, the rate of sharing was 50 percent higher than the rate at which they were judged accurate. The question, then, is why.

To state the obvious: Social media platforms are social. They focus our attention on social concerns, such as how much engagement our posts will get, how much our friends will enjoy them and what they communicate about our identities. These considerations may distract us from even asking whether news content is accurate before we share it, a tendency surely facilitated by the fact that social media algorithms are optimized for engagement rather than for truth.

But there’s good news here!

If the current design of social media platforms draws our attention away from accuracy, then it should be possible for alternative designs to shift attention back to truth. Such an approach would represent a fundamental shift in the way misinformation is fought online. Unlike standard fact-checking or educational approaches (which, to be clear, are also highly valuable), prompts that nudge people to think about accuracy do not require fact-checkers to keep up with the constant stream of falsehoods being produced. Nor do they require users to invest substantial amounts of time in educational tools. Rather, shifting attention to accuracy may be as simple as asking people a single question.

We conducted a large field experiment on Twitter in which we sent a simple accuracy prompt to more than 5,000 users who had recently shared links from Breitbart or Infowars. Our intervention did not provide novel information, nor did it prescriptively ask people to be more accurate or vigilant about fake news. Instead, we simply asked for their opinion about the accuracy of a single nonpolitical news headline. We didn’t expect them to actually respond to our question; our goal was to remind people of the concept of accuracy (which, again, the vast majority of people believe to be important) simply by asking about it.

We found that being asked the single accuracy question improved the average quality of the news sources that users subsequently shared on Twitter. For example, it reduced the fraction of their retweets in the next 24 hours that contained links to Breitbart and increased the fraction that linked to sites such as the New York Times and CNN.

We have found similar results in a variety of survey experiments. For example, in follow-up work with researchers at Jigsaw, a Google unit, we’ve found that numerous different approaches for shifting attention to accuracy are effective at increasing the quality of the news people share. We’ve also found that this works for news about COVID-19 as well as politics and that it’s effective for both Democrats and Republicans.

Accuracy prompts are certainly not going to solve the whole misinformation problem. But they represent a novel tool that platforms can use to get ahead of misinformation, instead of only playing catch-up by fact-checking falsehoods after they’ve been shared or censoring content once things get out of hand. Of course, this accuracy-prompt perspective requires platforms to embrace a different way of thinking about the misinformation problem: that it is driven by the underlying psychology of their users, the worst of which is often brought out by social media. This means that designing effective interventions must take psychology and cognitive science into account.

Importantly, social media platforms aren’t the only ones for whom accuracy prompts are relevant. Non-profit and civil society organizations can also employ this approach, for example by using targeted ads to deliver the prompts to users most at risk of sharing misinformation (e.g., older people, or people visiting known misinformation sites)—an approach we’ve taken in collaboration with Reset, an initiative run by Luminate.

Each of us, as individual citizens, can also help improve the nature of our online discourse, in two ways. First, we can pass along the idea that people are often distracted from accuracy and that it’s important to stop and think about whether something is true before sharing it. Second, and equally important, we can actually do this ourselves when we share content with the world.

We are living in a time when misinformation is a major concern for almost everyone, even many of the people who (accidentally) share it. We ourselves, researchers working on this very topic, have fallen into the trap and shared inaccurate content without thinking. But understanding that this is more a problem of inattention than of purposeful bad behavior makes things seem less grim, helps us see past the illusion that everyday citizens on the other side must be either stupid or evil, and points toward concrete solutions.