News Release

Bots actually target and pursue individual influencers

New research from USC indicates bots are strategically targeting influencers and amplifying violent commentary

Peer-Reviewed Publication

University of Southern California

Summary: New research co-authored by Emilio Ferrara, assistant research professor and associate director of informatics in the University of Southern California Department of Computer Science, examines "social hacking" on social networks that can increase violent commentary and affect voting behavior.

Where: The paper, "Bots Increase Exposure to Negative and Inflammatory Content in Online Social Systems," is published today in the Proceedings of the National Academy of Sciences of the United States of America.

Method and Context:

The researchers believe this is one of the first studies to investigate both the content bots generate and the specific strategies they employ. Reviewing nearly 4 million posts on Twitter, researchers from Fondazione Bruno Kessler and the University of Southern California Viterbi School of Engineering sought to understand online behavioral dynamics, the type of content bots shared, and which users they targeted in the context of Catalonia's referendum on independence.

The researchers discovered that influencers who supported Catalan independence were specifically targeted by the bots and were over 100 times more likely to engage with them. These influencers were also exposed to negative and violent narratives promoted by the bots.

Findings:

  • The authors' findings contrast with earlier studies that assumed bots share information without specific strategies

  • Bots, the researchers say, are selecting and pursuing specific targets

  • Bots tend to generate negative content aimed at polarizing highly influential human users to exacerbate social conflict

  • Looking at the referendum in Spain, the researchers determined that human influencers targeted by bots did not recognize they were being targeted and bombarded by non-human actors

Quotes from study co-author Emilio Ferrara:

  • "As we study this events, bots are so pervasive that anyone can be a target."

  • "Every user is exposed to this either directly or indirectly because bot-generated content is nowadays very pervasive"

  • "This is so endemic in online social systems... no one can tell if they are being manipulated."

  • "We need to go beyond technical solutions to this problem. We need regulation, laws and incentives that will force social media companies to regulate their platforms."

###

Media Availability: Please reach out to co-author Emilio Ferrara directly for media inquiries at emiliofe@usc.edu
