Image: World-first tool reduces harmful engagement with AI-generated explicit images. Pictured left to right are UCC School of Applied Psychology researchers Dr Conor Linehan; John Twomey, lead researcher of Deepfakes/Real Harms; and Dr Gillian Murphy.
Credit: University College Cork
- World’s first research-backed intervention reduces harmful engagement with AI-generated explicit imagery.
- As the Grok AI-undressing controversy grows, researchers say user education must complement regulation and legislation.
- Study links belief in deepfake pornography myths to higher risk of engagement with non-consensual AI imagery.
Friday, 16 January 2026: A new evidence-based online educational tool aims to curb the watching, sharing, and creation of AI-generated explicit imagery.
Developed by researchers at University College Cork (UCC), the free 10-minute intervention Deepfakes/Real Harms is designed to reduce users’ willingness to engage with harmful uses of deepfake technology, including non-consensual explicit content.
In the wake of the ongoing Grok AI-undressing controversy, pressure is mounting on platforms, regulators, and lawmakers to confront the rapid spread of these tools. UCC researchers say educating internet users to discourage engagement with AI-generated sexual exploitation must also be a central part of the response.
False myths drive participation in non-consensual AI imagery
UCC researchers found that people’s engagement with non-consensual synthetic intimate imagery, often but mistakenly referred to as “deepfake pornography”, is associated with belief in six myths about deepfakes. These include the belief that the images are only harmful if viewers think they are real, or that public figures are legitimate targets for this kind of abuse.
The researchers found that completing the free, online 10-minute intervention, which encourages reflection and empathy with victims of AI imagery abuse, significantly reduced belief in common deepfake myths and, crucially, lowered users’ intentions to engage with harmful uses of deepfake technology.
Using empathy to combat AI imagery abuse at its source
The intervention has been tested with more than two thousand international participants of varied ages, genders, and levels of digital literacy, with effects evident both immediately and at a follow-up weeks later.
The intervention tool, called Deepfakes/Real Harms, is now freely available at https://www.ucc.ie/en/deepfake-real-harms/.
Lead researcher John Twomey, UCC School of Applied Psychology, said: “There is a tendency to anthropomorphise AI technology – blaming Grok for creating explicit images and even running headlines claiming Grok ‘apologised’ afterwards. But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source.”
Dr Gillian Murphy, UCC School of Applied Psychology and research project Principal Investigator, said: “Referring to this material as ‘deepfake pornography’ is misleading. The word ‘pornography’ generally refers to an industry where participation is consensual. In these cases, there is no consent at all. What we are seeing is the creation and circulation of non-consensual synthetic intimate imagery, and that distinction matters because it captures the real and lasting harm experienced by victims of all ages around the world.”
“This toolkit does not relieve platforms and regulators of their responsibilities in tackling this appalling abuse, but we believe it can be part of a multi-pronged approach. All of us – internet users, parents, teachers, friends and bystanders – can benefit from a more empathetic understanding of non-consensual synthetic imagery,” Dr Murphy said.
Dr Conor Linehan, UCC School of Applied Psychology, said: “With this project, we are building on our previous work in the area of responsible software innovation. We propose a model of responsibility that empowers all stakeholders, from platforms to regulators to end users, to recognise their power and take all available action to minimize harms caused by emerging technologies.”
Reducing intentions to engage in harmful deepfake behaviours
Feedback from those who have completed the intervention includes:
“I think it was very useful to show that deepfakes can be damaging even if people know they aren't real. Too much of the deepfake discourse focuses on people being unable to tell them apart from reality when that's only part of the issue.”
“What stood out as good about this is that it didn’t come across as judgmental or preachy—it was more like a pause button. It gave space to think about the human side of the issue without making anyone feel attacked. … Instead of just pointing fingers, it gave you a chance to reflect and maybe even empathize a little, which can make the message stick longer than just being told, ‘Don’t do this’.”
Deepfakes/Real Harms is launched as part of UCC Futures - Artificial Intelligence and Data Analytics.
Professor Barry O’Sullivan, Director of UCC Futures - Artificial Intelligence and Data Analytics and member of the Irish Government’s Artificial Intelligence Advisory Council, said: “As we work towards a future of living responsibly with artificial intelligence, there is an urgent need to improve AI literacy across society. As my colleagues at UCC have demonstrated with this project, this approach can reduce abuse perpetration and combat the stigma faced by victims.”
This project is funded by Lero, the Research Ireland Centre for Software.
ENDS