Feature Story | 20-Nov-2025

AI psychosis risk: LLMs fail to challenge delusions, experts warn

JMIR Publications publishes new article rethinking AI safety: Examining large language models’ role in psychological destabilization

JMIR Publications

(Toronto, November 20, 2025) JMIR Publications continues to expand its new "News & Perspectives" section with a deeply relevant article that delves into the growing concerns surrounding the psychological safety of large language models (LLMs). The article, "Shoggoths, Sycophancy, Psychosis, Oh My: Rethinking Large Language Model Use and Safety," explores how tendencies such as AI sycophancy may amplify delusional beliefs and contribute to user harm, a phenomenon often referred to as "AI psychosis."

The analysis, written by Kayleigh-Ann Clegg, JMIR Correspondent and Scientific News Editor for JMIR Publications, brings together insights from clinical psychology, AI development, and policy to examine the complex risks that may be associated with prolonged or intensive LLM use, particularly among vulnerable users.

The article details a recent simulation study that highlights the issue of sycophancy, a tendency whereby LLMs, to varying extents, fail to adequately challenge delusional or false content presented by the user. The study found that all models tested exhibited some degree of "psychogenicity," frequently confirming potential delusions and missing opportunities to provide crucial safety interventions.

Key Concerns and Calls to Action from the Article:

  • Sycophancy as a Risk Factor: Experts interviewed in the article, including Dr. Kierla Ireland, Clinical Psychologist, and Dr. Josh Au Yeung, Neurology Registrar and Clinical Lead at Nuraxi.ai, emphasize that the anthropomorphic nature of LLMs combined with sycophantic behavior creates a heightened risk of confirmation bias, potentially leading to "LLM-induced psychological destabilization."

  • The Need for Developer Accountability: The article highlights the responsibility of AI developers to build safeguards. Dr. Au Yeung’s team is already applying its new safety benchmark, "psychosis-bench," to its own products and hopes that other developers will integrate similar safeguards.

  • The Case for Meaningful Regulation: Camille Carlton, Policy Director at the Center for Humane Technology, advocates for independent verification and meaningful regulation. She notes that while developers are best positioned to implement safety guardrails, they should not be "grading their own homework." She calls for common-sense approaches, like product liability, to address harms associated with AI use.

"As a psychologist by training, I know that LLMs can be tools for good when it comes to mental health, but I think we’re all becoming increasingly aware of the serious potential risks,” said Kayleigh Clegg. "It’s vital for researchers, developers, and policymakers to have an empirically grounded, interdisciplinary dialogue about how to build safeguards around these tools. I hope this piece will contribute to that dialogue.”

The article ultimately concludes that whether AI is a "Lovecraftian monster or carnival mirror," further empirical research, transparency, and policy are urgently needed to build robust safeguards. Cross-talk, critical thinking, and caution are deemed essential for moving forward responsibly.

The "News & Perspectives" section of the Journal of Medical Internet Research is dedicated to delivering timely, intellectually responsible content, ranging from investigative pieces to expert commentary, ensuring the health technology community stays informed on emerging trends and critical policy debates.

Read the full article:

Clegg K. Shoggoths, Sycophancy, Psychosis, Oh My: Rethinking Large Language Model Use and Safety. J Med Internet Res 2025;27:e87367
URL: https://www.jmir.org/2025/1/e87367
DOI: 10.2196/87367


About JMIR Publications News & Perspectives

JMIR Publications is a leading open access publisher of digital health research. The "News & Perspectives" section is the newest addition to its portfolio, established to bring the rigor and integrity of academic publishing to scientific journalism. The section features well-researched, expert-driven content from the Scientific News Editor, Kayleigh-Ann Clegg, PhD, and a network of specialist JMIR Correspondents, including Dr. van Mierlo, to keep the digital health community informed, inspired, and ahead of the curve.

About JMIR Publications

JMIR Publications is a leading open access publisher of digital health research and a champion of open science. With a focus on author advocacy and research amplification, JMIR Publications partners with researchers to advance their careers and maximize the impact of their work. As a technology organization with publishing at its core, we provide innovative tools and resources that go beyond traditional publishing, supporting researchers at every step of the dissemination process. Our portfolio features a range of peer-reviewed journals, including the renowned Journal of Medical Internet Research. To find out more about JMIR Publications, visit jmirpublications.com or connect with us on Bluesky, X, LinkedIn, YouTube, Facebook, and Instagram.

Media Contact:

Dennis O’Brien, Vice President, Communications & Partnerships

JMIR Publications

communications@jmir.org

+1 416-583-2040

The content of this communication is licensed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, published by JMIR Publications, is properly cited.
