News Release

AI-generated images of depression depict more stereotypes and arouse greater stigmatization

These are the findings of a UPF study that analyzed the opinions of associations of people with depression, young people, and science and health communication professionals

Peer-Reviewed Publication

Universitat Pompeu Fabra - Barcelona

Images generated using artificial intelligence (AI) depict more stereotypes and stigmas around depression than the images used by the media to illustrate the disease. This is the main conclusion of a study on how different groups – including patient associations, young people and communication professionals – perceive the images used by the media when talking about depression. “The images generated by AI depict more concepts related to stigma, such as marginalization or social exclusion”, warns Núria Saladié, first author of the study and a member of the Science, Communication and Society Studies Centre (CCS) at Pompeu Fabra University (UPF). According to the authors, communicating news about mental health responsibly, without reproducing stereotypes, requires understanding that technology is not neutral and taking into account the recommendations issued by patient associations.

AI-generated images tend to depict people alone, in the shadows or against a backlight, with their faces hidden and not taking part in any activity. This accentuates stereotypes and stigma and has a negative effect on people suffering from depression. These are the findings of a study published in the journal JMIR Human Factors, which examined how different population groups – including patient associations, young people and communication professionals – perceive the images used by the media to depict the disease.

“Many AI-generated images do not reflect the diversity of experiences associated with the disease”, explains Carolina Llorente, also an author of the study and a researcher at the CCS-UPF. Llorente highlights that “being able to take into account the vision of people who have experienced the disease up close has been one of the most valuable aspects to avoid perpetuating stereotypes”.

The study also reveals that when people know an image has been generated by AI, they are more critical of it than when they do not, which suggests that transparency around the use of AI can influence how these representations are interpreted. “AI is already being used – and will be increasingly used – in mental health communication”, Saladié explains. She adds, “If we want this communication to be responsible, we require a more careful and critical approach to the use of AI”.

To be able to communicate news about mental health responsibly, avoiding stereotypes, it must be understood that “AI tools do not generate images neutrally: they respond to the instructions they receive. Therefore, it is important to think carefully about the prompts and review the results critically”, points out Gema Revuelta, director of the CCS-UPF and leader of the study, which concludes that “improving the quality of visual representations related to depression depends on teamwork when pooling the vision and knowledge of patient organizations, mental health experts, science journalists, AI developers and researchers”. 
