A University of Texas at Arlington-led team is building computer tools to detect social bots on the web that create and spread fake news.
Chengkai Li, UTA associate professor in the Department of Computer Science and Engineering, is leading the project, which is funded through the Texas National Security Network Excellence Fund out of the University of Texas at Austin's Clements Center for National Security and The Robert S. Strauss Center for International Security and Law.
The team will use sophisticated algorithms to combat the bots and the spread of fake news.
Co-principal investigators for the project include: Christoph Csallner, UTA associate professor in the Department of Computer Science and Engineering; Mark Tremayne, UTA assistant professor in the Department of Communication; Zhiqiang Lin, UT Dallas associate professor of computer science; and Angela Lee, UT Dallas assistant professor of emerging media and communication.
The project, titled "Bot vs. Bot: Automated Detection of Fake News Bots," will focus on Twitter accounts. Bots, in the context of Twitter, are accounts run by computer programs that automatically publish and forward content, follow other accounts, leave comments and conduct seemingly "real" activity.
"This is a seed grant that we hope will lead to a much larger grant that will identify these bots for social media users," Li said. "Right now, you don't know what is coming from a real person and what's coming from a computer, sometimes for malicious, or at least, misleading reasons."
Li said the project's focus is on national security threats rather than domestic politics.
"These bots often are sponsored by nation states that are hostile to U.S. interests," Li said. "This project needs to have a worldwide reach."
Csallner said the project's aim is to create computer programs that distinguish bot from human.
"For example, even if a bot uses high-end artificial intelligence and massive processing power, an extremely simple detection technique may be enough if the bot always posts at the same time of day or has some other trait that makes it easy to distinguish the bot from humans," Csallner said.
Tremayne said what makes the team's task especially difficult is that fake news often originates from reports that contain some real facts.
"You might find that a bot takes a piece of real and true information, then adds an element that isn't true. So, in the end, you have different levels of fake news," Tremayne said.
He said these bots are just the latest form of fake news. Propaganda and the yellow journalism of more than a century ago, Tremayne said, also fall under the larger category of fake news.
Li said the interdisciplinary nature of the project ensures the issue is examined from all angles.
"We will leverage our research expertise in computational fact-checking, static and dynamic code analysis, data mining and security," Li said. "The work will be grounded by communication and journalism principles. We will conduct experiments to better understand the interaction between bots and news consumption behaviors and effects. By putting together a team of computer scientists and social science scholars, this project, seeks to advance our understanding of fake-news bots and our capability of countering it."
This isn't Li's first venture into this type of research.
Collaborating with researchers from Duke and Stanford universities, Li used National Science Foundation and Knight Foundation grants to build ClaimBuster, an automated tool that assists fact-checkers in finding important factual claims to check. ClaimBuster has been used to aid in checking the veracity of statements said by candidates in U.S. presidential debates.
Tremayne also is a participant on the ClaimBuster project and a collaborator on the NSF grant. His research interests are in the transformation of journalism and mass communication in the digital era. He published research on the controversial use of unmanned aerial vehicles in journalism and mass communication, also known as drone journalism.
Csallner's research emphasis relates to software engineering, and specifically includes program analysis, automated bug finding, software security and cell phone software development.
This project embodies the data-driven discovery platform under UTA's Strategic Plan 2020: Bold Solutions | Global Impact.