News Release

How to survive the explosion of AI slop

Peer-Reviewed Publication

PNAS Nexus

Image: A deepfake in which the author inserted his own face (source in upper left) into an AI-generated image of an inmate in an orange jumpsuit.

Credit: AI image created by Hany Farid

In a Perspective, Hany Farid highlights the risks posed by manipulated and fraudulent images and videos, known as deepfakes, and explores interventions that could mitigate the harms deepfakes can cause. Farid explains that visually distinguishing the real from the fake has become increasingly difficult and summarizes his research on digital forensic techniques used to determine whether images and videos have been manipulated.

Farid celebrates the positive uses of generative AI, including helping researchers, democratizing content creation, and, in some cases, literally giving voice to those whose voices have been silenced by disability. But he warns against harmful uses of the technology, including non-consensual intimate imagery, child sexual abuse imagery, fraud, and disinformation. In addition, the mere existence of deepfake technology means that malicious actors can cast doubt on legitimate images simply by claiming they were made with AI.

So, what is to be done? Farid highlights a range of interventions to mitigate such harms, including legal requirements to mark AI-generated content with metadata and imperceptible watermarks, limits on the prompts that services will accept, and systems that link user identities to created content. In addition, social media content moderators should ban harmful images and videos. Furthermore, Farid calls for digital media literacy to become part of the standard educational curriculum.

Farid summarizes the authentication techniques experts can use to sort the real from the synthetic and explores the policy landscape around harmful content. Finally, Farid asks researchers to stop and question whether their research output could be misused and, if so, whether to take steps to prevent that misuse or abandon the project altogether. Just because something can be created does not mean it must be created.
