News Release

Study shows users banned from social platforms go elsewhere with increased toxicity

Peer-Reviewed Publication

Binghamton University

BINGHAMTON, N.Y. -- Users banned from social platforms go elsewhere with increased toxicity, according to a new study featuring researchers from Binghamton University, State University of New York.

When people act like jerks on social media, one common response is to permanently ban them from posting again. Take away the digital megaphone, the theory goes, and the hurtful or dishonest messages from those troublemakers won't pose a problem there anymore.

What happens after that, though? Where do those who have been "deplatformed" go, and how does it affect their behavior in the future?

An international team of researchers — including Assistant Professor Jeremy Blackburn and PhD candidate Esraa Aldreabi from the Thomas J. Watson College of Engineering and Applied Science’s Department of Computer Science — explores those questions in a new study called “Understanding the Effect of Deplatforming on Social Networks.”

The research, performed by iDRAMA Lab collaborators at Binghamton University, Boston University, University College London and the Max Planck Institute for Informatics in Germany, was presented in June at the 2021 ACM Web Science conference.

Researchers developed a method to identify accounts belonging to the same person on different platforms, and found that being banned on Reddit or Twitter led those users to join alternative platforms such as Gab or Parler, where content moderation is more lax.

Also among the findings: although users who move to those smaller platforms have a potentially reduced audience, they exhibit higher levels of activity and toxicity than they did previously.

“You can’t just ban these people and say, ‘Hey, it worked.’ They don’t disappear,” Blackburn said. “They go off into other places. It does have a positive effect on the original platform, but there’s also some degree of amplification or worsening of this type of behavior elsewhere.”

The deplatforming study collected 29 million posts from Gab, which launched in 2016 and currently has around 4 million users. Gab is known for its far-right base of neo-Nazis, white nationalists, anti-Semites and QAnon conspiracy theorists.

Using a combination of machine learning and human labeling, researchers cross-referenced profile names and content with users who had been active on Twitter and Reddit but were suspended. Many who are deplatformed reuse the same profile name or user info on a different platform for continuity and recognizability with their followers.

“Just because two people have the same name or username, that’s not a guarantee,” Blackburn said. “There was a pretty big process of going through creating a ‘ground truth’ data set, where we had a human say, ‘These have to be the same people because of this reason and that reason.’ That allows us to scale things up by throwing it into a machine learning classifier [program] that will learn the characteristics to watch for.”

The process was not unlike how scholars determine the identity of authors for unattributed or pseudonymous works, checking for style, syntax and subject matter, he added.
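
To make that process concrete, here is a minimal sketch of how such a pairing classifier might be built. The features, model choice and toy data below are illustrative assumptions for the sketch, not the study's actual pipeline.

```python
# Hypothetical sketch of cross-platform account matching: human-labeled
# "ground truth" pairs train a classifier to score new candidate pairs.
from difflib import SequenceMatcher
from sklearn.ensemble import RandomForestClassifier

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def pair_features(acct_a: dict, acct_b: dict) -> list[float]:
    """Turn a candidate (suspended account, Gab account) pair into features."""
    return [
        similarity(acct_a["username"], acct_b["username"]),
        similarity(acct_a["bio"], acct_b["bio"]),
        similarity(acct_a["recent_text"], acct_b["recent_text"]),
    ]

# Human-labeled ground truth: 1 = same person, 0 = different people (toy data).
labeled_pairs = [
    (({"username": "example_user", "bio": "writer", "recent_text": "hello world"},
      {"username": "example_user", "bio": "writer, banned", "recent_text": "hello again"}), 1),
    (({"username": "alice123", "bio": "cats", "recent_text": "meow"},
      {"username": "bob_456", "bio": "dogs", "recent_text": "woof"}), 0),
]

X = [pair_features(a, b) for (a, b), _ in labeled_pairs]
y = [label for _, label in labeled_pairs]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new candidate pair; a high probability suggests the same person.
candidate = ({"username": "example_user", "bio": "writer", "recent_text": "posting here"},
             {"username": "exampleuser", "bio": "writer", "recent_text": "new platform"})
print(clf.predict_proba([pair_features(*candidate)])[0][1])
```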

In the dataset analyzed for this study, about 59% of Twitter users (1,152 out of 1,961) created Gab accounts after their last active time on Twitter, presumably after their account was suspended. For Reddit, about 76% (3,958 out of 5,216) of suspended users created Gab accounts after their last post on Reddit.
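
For readers checking the arithmetic, the reported shares follow directly from the counts above:

```python
# Shares reported in the study (counts taken from the article above).
twitter = 1152 / 1961   # ~0.587 -> "about 59%"
reddit = 3958 / 5216    # ~0.759 -> "about 76%"
print(f"Twitter: {twitter:.1%}, Reddit: {reddit:.1%}")
```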

Comparing content posted by the same users on Twitter and Reddit versus Gab, the researchers found that users tend to become more toxic when they are suspended from one platform and forced to move to another. They also become more active, posting more frequently.
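
A rough sketch of how such a per-user before-and-after comparison could be structured is below; `score_toxicity` is a hypothetical placeholder, since the release does not name the toxicity measure the researchers used.

```python
# Sketch of a matched-user before/after comparison, assuming a toxicity
# scorer in [0, 1]. The keyword-based scorer here is a stand-in; real work
# would use a trained toxicity model.
from statistics import mean

def score_toxicity(text: str) -> float:
    """Hypothetical placeholder for a real toxicity model."""
    insults = {"jerk", "idiot"}
    words = text.lower().split()
    return sum(w in insults for w in words) / max(len(words), 1)

def user_shift(posts_before: list[str], posts_after: list[str]) -> dict:
    """Compare a user's average toxicity and posting volume across platforms."""
    return {
        "toxicity_before": mean(score_toxicity(p) for p in posts_before),
        "toxicity_after": mean(score_toxicity(p) for p in posts_after),
        "posts_before": len(posts_before),
        "posts_after": len(posts_after),
    }

print(user_shift(["have a nice day"], ["you are a jerk", "idiot take"]))
```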

At the same time, the audience for Gab users’ content is curtailed by the reduced size of the platform compared to the millions of users on Twitter and Reddit. This might be seen as a good thing, but Blackburn cautioned that much of the planning for the Jan. 6 attack on the U.S. Capitol happened on Parler, a platform similar to Gab with a smaller user base that skews to the alt-right and far-right.

“Reducing reach probably is a good thing, but reach can be easily misinterpreted. Just because someone has 100,000 followers doesn’t mean they’re all followers in the real world,” he said.

“The hardcore group, maybe the group that we’re most concerned about, are the ones that probably stick with someone if they move elsewhere online. If by reducing that reach, you increase the intensity that the people who stay around are exposed to, it’s like a quality versus quantity type of question. Is it worse to have more people seeing this stuff? Or is it worse to have more extreme stuff being produced for fewer people?”

A separate study, “A Large Open Dataset from the Parler Social Network,” also included Blackburn among researchers from New York University, the University of Illinois, University College London, Boston University and the Max Planck Institute.

Presented at the AAAI Conference on Web and Social Media last month, it analyzed 183 million Parler posts made by 4 million users between August 2018 and January 2021, as well as metadata from 13.25 million user profiles. The data confirm that users on Parler — which briefly shut down and was removed from the Apple and Google app stores in response to the Capitol riot — overwhelmingly supported President Donald Trump and his "Make America Great Again" agenda.

“Regardless of what Parler might have said, publicly or not, it was very clearly white, right-wing, Christian Trump supporters,” Blackburn said. “Again, unsurprisingly, it got its largest boost right at the 2020 election — up to a million users joining. Then around the attack at the Capitol, there was another big bump in users. What we can see is that it was very clearly being used as an organization tool for the insurrection.”

So if banning users is not the right answer, what is? Reddit admins, for example, have a "shadow-banning" capability that lets troublesome users believe they're still posting to the site when, in fact, no one else can see their posts. During the 2020 election and the COVID-19 pandemic, Twitter added content moderation labels to tweets that deliberately spread disinformation.
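
As an illustration of that mechanism, here is a minimal sketch of shadow-ban filtering; the types and names are hypothetical, not Reddit's actual implementation.

```python
# Minimal sketch of shadow-banning: the banned author still sees their own
# posts, but those posts are filtered out of everyone else's feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

shadow_banned: set[str] = {"troublesome_user"}
posts = [Post("troublesome_user", "inflammatory take"), Post("alice", "hello")]

def visible_feed(viewer: str) -> list[Post]:
    """A shadow-banned author's posts appear only in their own feed."""
    return [p for p in posts if p.author not in shadow_banned or p.author == viewer]

print([p.text for p in visible_feed("troublesome_user")])  # sees both posts
print([p.text for p in visible_feed("alice")])              # sees only "hello"
```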

Blackburn is unsure about the full range of moderation tools that social media platforms have available, but he thinks there need to be more "socio-technical solutions to socio-technical problems" rather than just outright bans.

“Society is now fairly firmly saying that we cannot ignore this stuff — we can’t just use the easy outs anymore,” he said. “We need to come up with some more creative ideas to not get rid of people, but hopefully push them in a positive direction or at least make sure that everybody is aware of who that person is. Somewhere in between just unfettered access and banning everybody is probably the right solution.”
