News Release

New “humane intelligence” framework guides safer, more patient-centered AI in older-adult mental health care

“Moral Grid Operational Index” translates the four pillars of Humane Intelligence into practical point-of-care safeguards

Peer-Reviewed Publication

Boston University Chobanian & Avedisian School of Medicine

KEY TAKEAWAYS

  • Why it matters: AI already shapes access, triage, and support for older adults, but governance often focuses on algorithms and infrastructure rather than real patient and caregiver experience.
  • What’s new: A clinician-researcher-led framework (“Humane Intelligence”) plus a Moral Grid Operational Index that makes ethical principles operational with clear actions, oversight, and auditable evidence.
  • What it protects: Transparency patients can understand, consent appropriate to cognitive vulnerability, clinician accountability for high-stakes decisions, and monitoring of outcomes that matter (such as function, distress, caregiver burden, avoidable utilization, equity).

 

(Boston)—Artificial intelligence (AI) is increasingly used to identify older adults for services, support people between visits, and guide referrals and care pathways. Yet much AI governance still emphasizes algorithms and infrastructure rather than what older adults and caregivers actually experience, especially in moments of vulnerability.

 

A Special Article in the American Journal of Geriatric Psychiatry led by Boston University Chobanian & Avedisian School of Medicine clinician-researcher Helen H. Kyomen, MD, MS, offers a new, geriatric psychiatry–led “Humane Intelligence” framework to help clinicians and health systems augment older-adult care with AI in ways that are safe, fair, and deeply human.

 

Humane Intelligence is a patient-centered, ethically attuned, clinically grounded relational framework for designing, evaluating, and monitoring AI in older-adult care. It rests on four pillars: Relational Intelligence; Transparency with Care; Reciprocity and Consent; and Ethical Governance in Strategic Regions. The framework applies these pillars from point-of-care encounters to system-level decisions.

 

To translate these principles into day-to-day practice, the article introduces the Moral Grid Operational Index, which links each pillar to concrete, observable point-of-care behaviors and the kinds of evidence that show those behaviors occurred. It is intended to help teams evaluate who benefits, who is at risk, and what safeguards are needed as AI tools move from routine uses to higher-stakes settings.

 

“In simple terms, we gathered what is known, added clinical wisdom, and shaped it into tools people can actually use,” explains Kyomen, corresponding author and assistant professor of psychiatry at Boston University Chobanian & Avedisian School of Medicine and Medical Director of the Boston Medical Center-Brighton Geriatric Psychiatry Program.

 

The article synthesizes current guidance and real-world clinical experience into an operational framework, illustrated with composite case examples built from common patterns rather than identifiable patients; it does not report results from a trial of a single AI tool.

 

The authors developed the framework by:

 

• Reviewing recent national and international guidance on AI in health care.

• Drawing on clinical experience in geriatric psychiatry and aging care.

• Considering how AI is already being used with older adults (for example, tools that detect falls, screen for cognitive concerns, assist with documentation, or act as digital companions).

• Translating this into a practical Humane Intelligence framework with four pillars and a Moral Grid Operational Index.

• Working through realistic composite case examples to ensure the framework offers clear guidance in everyday situations.

 

Clinically, the framework emphasizes that a responsible human clinician remains accountable for high-stakes decisions and discourages fully automated mental health care for older adults. It encourages plain-language explanations so patients and caregivers know when AI is involved and what it can and cannot do. It also urges health systems to track outcomes that matter to older adults and families, including day-to-day function, emotional distress, caregiver burden, avoidable emergency visits or hospitalizations, and fairness across different groups.

 

“Our hope is that this work helps health systems augment care with AI in ways that make care for older adults kinder, safer, and more personal, not colder or more mechanical,” Kyomen said. “If we get this right, AI can support better decisions and earlier help while protecting trust, accountability, and the human bond at the center of care.”

 

Journal reference:

Kyomen HH (on behalf of the Group for the Advancement of Psychiatry Committee on Aging). “Humane Intelligence in Geropsychiatric Care: Relational Artificial Intelligence, Clinical Wisdom, and the Moral Grid Operational Index.” The American Journal of Geriatric Psychiatry (Article in Press; abstract available online ahead of print).

Abstract link:
https://www.ajgponline.org/article/S1064-7481(25)00567-6/abstract

DOI: 10.1016/j.jagp.2025.12.012

Full text available via ScienceDirect (subscription/institutional access may be required).

More details at: https://blogs.bu.edu/hhkyomen/humane-intelligence-aam/

 

Note: This clinical ethics and governance framework shares its name with a U.S. nonprofit called Humane Intelligence but is not affiliated with that organization.

