News Release

On Twitter, false information travels farther and faster than the truth

Peer-Reviewed Publication

American Association for the Advancement of Science (AAAS)

An analysis of how true and false news stories spread on Twitter reveals that false news spreads substantially faster and to far more people. Social media has created a boom in the spread of information, yet little is known about how it has facilitated the spread of false information. Here, Soroush Vosoughi and colleagues analyzed the diffusion of verified true and false news stories on Twitter between 2006 and 2017. The data comprised roughly 126,000 stories tweeted by 3 million people more than 4.5 million times. The stories were classified as true or false based on the verdicts of six independent fact-checking organizations, which showed strong agreement on the classifications. In particular, the authors looked at the likelihood that a tweet would create a "cascade" of retweets.

False information diffused significantly farther, faster, deeper, and more broadly than the truth across all categories of information, the authors report. Overall, falsehoods were 70% more likely to be retweeted than the truth. Whereas the truth rarely diffused to more than 1,000 people, the top 1% of false-news cascades routinely diffused to between 1,000 and 100,000 people. Of the various types of false news, political news was the most virulent, spreading at three times the rate of other false-news topics.

To probe why, Vosoughi et al. conducted an additional, rigorous analysis of whether Twitter users were more likely to retweet information perceived as "novel." Indeed, they report, false news is perceived as more novel than true news, and novel information is more likely to be retweeted. In assessing the emotional content of tweets, they found that false stories inspired fear, disgust, and surprise in replies, whereas true stories inspired anticipation, sadness, joy, and trust. Lastly, when the authors used an algorithm to remove bots from their analysis, the results suggested that humans play a greater role than bots do in the dissemination of false news.
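For illustration only (this is not the authors' code or data), the following minimal Python sketch shows how cascade measures like those discussed above, namely size, depth, and breadth, could be computed from a hypothetical list of retweet edges. All identifiers and example values are assumptions made for demonstration.

    # Illustrative sketch (not the study's actual pipeline): compute the size,
    # depth, and maximum breadth of one retweet cascade from a hypothetical
    # list of (parent_tweet_id, retweet_id) edges rooted at the original tweet.
    from collections import defaultdict, deque

    def cascade_metrics(root, edges):
        children = defaultdict(list)
        for parent, child in edges:
            children[parent].append(child)

        size, depth = 0, 0
        level_counts = defaultdict(int)      # tweets observed at each depth
        queue = deque([(root, 0)])           # breadth-first walk from the root
        while queue:
            node, level = queue.popleft()
            size += 1
            depth = max(depth, level)
            level_counts[level] += 1
            for child in children[node]:
                queue.append((child, level + 1))
        return size, depth, max(level_counts.values())

    # Hypothetical example: tweet "t0" is retweeted by "r1" and "r2",
    # and "r1" is itself retweeted by "r3".
    print(cascade_metrics("t0", [("t0", "r1"), ("t0", "r2"), ("r1", "r3")]))
    # -> (4, 2, 2): four tweets, two hops deep, at most two tweets per level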

In a related Policy Forum, David Lazer et al. underscore the need to address the prevalence and sway of fake news, which they define as fabricated information that mimics news content in form but not in organizational process or intent. (They prefer the term "fake" news over "false" news because the former's "political salience draws attention to an important subject.") The spread of fake news has drawn particular attention recently in a political context. In the U.S., political polarization has fostered dislike of the "other side," creating an environment in which fake news can attract a mass audience. The authors cite preliminary evidence quantifying the reach of fake news, with one conservative study estimating that the average American encountered between one and three stories from known publishers of fake news during the month before the 2016 U.S. presidential election. They also highlight the complexities of the human psyche, which prefers information that is familiar and that supports one's preexisting views, exacerbating the problem. To address the issue, the authors provide detailed recommendations on two key types of intervention: one that focuses on empowering individuals to evaluate the fake news they encounter, and a second that targets structural changes aimed at preventing individuals' exposure to fake news. They call for an interdisciplinary research effort involving the various social media platforms, and for society at large to work toward a news ecosystem and culture that values and promotes truth.

###