ITHACA, N.Y. – As the saying goes, “You catch more flies with honey than with vinegar.” But in social media commentary, vinegar seems to be the tone of choice.
In two recently published papers, Cornell University researchers identified seven distinct strategies commenters employ when objecting to content online, noting that reputational attacks, or "vinegar," are the most common, while "honey," in the form of moral appeals to the offender, is far less common.
In two randomized experiments on a simulated social media platform, the researchers found that comments promoting "restorative justice" – encouraging offenders to apologize, rather than shaming and blaming them – were seen as more effective at achieving justice and were met with more support from the broader online community.
“Many people tend to think it’s only a small minority that comment on public social media posts like news stories, and while that might be true on the whole, many more people do read these comments,” said Ashley Shea, a Ph.D. candidate in the field of communication. “The comments have an impact on how we assess the tenor and even credibility of the original news story and can influence how others choose to participate. Therefore, these public comment spaces do play a civic function, whether we like it or not.”
The research was outlined in two papers, in PLOS One and PNAS Nexus.
As people become increasingly siloed in how and where they get their news, and with social media offering a way to comment anonymously on the day's headlines, comment sections are fertile ground for "cross-talk" – the back-and-forth between commenters that often degenerates into name-calling or worse.
“In these situations, comment sections have been described as a ‘battleground,’ where people are trying to exert power and control over one another,” Shea said. “But we found that there hadn’t been a large-scale observation to really see what is happening in these spaces. How are people trying to exert control? What discursive tactics are they using?”
Over the course of several months in fall 2021, Shea and the team analyzed more than 8,500 comment replies to trending news videos on YouTube and Twitter (now X) and identified seven distinct discursive objection strategies – the idea, Shea said, that “someone is making an attempt to restrict the ways that others speak.”
They examined the frequency of each strategy’s occurrence – Shea noted that only about 10% of all comments qualified as discursive objection – and found that reputational attacks are the most common.
“You might think it’s a small percentage of overall comments, and thus not that problematic, but within that 10%, ad hominem attacks are clearly the largest percentage of tactics being employed,” Shea said. “But we also saw strategies that were pro-social, like the use of logic or moral appeals.”
That spurred the second study, led by Pengfei Zhao, a former member of the Social Media Lab, to find out how effective those pro-social strategies could be. Zhao found that, although posts seeking to punish the offending commenter – known as retributive vigilance – were most common, posts that promoted healing were viewed more favorably and drew more supportive behavior from the community, in the form of more upvotes and fewer downvotes. They also boosted community members' satisfaction and their intention to keep engaging in the future.
There is one caveat, however: When the offender is viewed as morally incorrigible – incapable of being reformed – then retribution is seen as the more favorable approach.
“If people think that the offender cannot change their future wrongdoing,” Zhao said, “then appealing to moral values, inviting them to apologize, won’t work. Then people will resort to retribution; they think this person deserves punishment.”
Both studies were supported by grants from the National Science Foundation.
For additional information, see this Cornell Chronicle story.
-30-