News Release

AI can tell if a therapy session will be effective

New tool automates evaluations of cognitive behavioral therapy

Peer-Reviewed Publication

University of Southern California

Cognitive behavioral therapy (CBT) is one of the most common types of talk therapy in the United States. Cognitive behavioral therapists-in-training are normally judged on 11 criteria. What if their skills could be evaluated and improved with feedback from AI? This is the crux of new research from the USC Viterbi School of Engineering in conjunction with the University of Pennsylvania and the University of Washington. It’s the first study of CBT sessions conducted with real people in real therapeutic conversations. The findings were recently published in PLOS ONE.

Over 1,100 real conversations between therapists-in-training and patients were analyzed by an AI created by the Signal Analysis and Interpretation Laboratory (SAIL) at the University of Southern California Viterbi School of Engineering. The challenge for AI, says lead author Nikolaos Flemotomos, a PhD student in electrical engineering at USC, is understanding multiple speakers and making meaning from just the text of a conversation. Apprenticing therapists normally have their sessions evaluated by human raters; the AI was able to match a human evaluator’s assessments with 73 percent accuracy.

The AI could judge a therapist’s interpersonal skills and discern whether the therapist created the right structure for the session (if they addressed a patient’s assigned homework, for example). In addition, the AI could tell whether a therapist was appropriately focused on the patient, rather than sharing too much of their own story, and whether they were able to collaborate with the patient and establish rapport. All of these aspects are combined to generate a single aggregate quality metric.
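As a rough illustration of that last step, per-criterion scores might simply be summed into one session-level number. The criterion names, the 0-to-6 scale, and the competence threshold in the sketch below are illustrative assumptions, not the study’s actual scoring scheme.

```python
# Hypothetical sketch: combining per-criterion session scores into a
# single aggregate quality metric. The criterion names, the 0-6 scale,
# and the competence threshold are illustrative assumptions only.

CRITERIA = [
    "agenda", "feedback", "understanding", "interpersonal_effectiveness",
    "collaboration", "pacing", "guided_discovery", "focus_on_key_cognitions",
    "strategy_for_change", "application_of_techniques", "homework",
]

def aggregate_quality(scores, threshold=40.0):
    """Sum the 11 per-criterion scores (each assumed to run 0-6) and
    flag the session as competent if the total clears the threshold."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"missing criterion scores: {missing}")
    total = sum(scores[c] for c in CRITERIA)
    return total, total >= threshold

# Example: one score per criterion, produced by a model or a human rater.
session_scores = {c: 4.0 for c in CRITERIA}
total, competent = aggregate_quality(session_scores)
print(f"aggregate score: {total:.1f}, competent: {competent}")
```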

The AI evaluated only language patterns, through automatically generated text transcriptions, not the tonal quality of the speakers during the sessions. The difficulty, says Flemotomos, is that making meaning of these conversations, and evaluating them against the CBT protocol, is particularly challenging given the wide range of possible language choices and the errors introduced by automatic transcription.
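To make the text-only setup concrete, here is a minimal sketch of a session-scoring pipeline: a session transcript goes in, and a prediction for a single criterion comes out. The simple TF-IDF-plus-logistic-regression model, and the tiny placeholder transcripts and labels, stand in for the study’s actual models, which this release does not specify.

```python
# Minimal sketch of a text-only session-scoring pipeline. The model
# choice and the inline "dataset" are placeholders for illustration.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Each example is the concatenated text of one session's transcript,
# with a binary label for a single criterion (e.g., "addressed homework").
transcripts = [
    "therapist: did you complete the thought record we discussed ...",
    "therapist: tell me about your week ...",
]
labels = [1, 0]  # 1 = criterion met, 0 = not met (placeholder labels)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(transcripts, labels)

# Score a new, automatically transcribed session for this criterion.
new_session = "therapist: let's start by reviewing your homework ..."
print(model.predict_proba([new_session])[0, 1])
```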

Such evaluations, normally done by humans, are necessary for training a therapist and providing performance-based feedback, leading to improved clinical outcomes. The goal, say the researchers, is to automatically generate these metrics from a recorded session to facilitate such applications.

The researchers state, “…our goal is not to replace human supervision, but rather augment the supervisor’s efficiency and additionally offer a tool for self-assessment.”

With this tool, the evaluation process could be scaled to help meet the increasing demand for mental health services from trained professionals.

For continuous improvement in the field, Flemotomos says, “We would like to see these adopted in real-world clinics.”

The next step is to add the tonal, or prosodic, qualities of spoken interactions to the tool to enrich its capability.
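A minimal sketch of what that extension might look like, assuming session-level text and prosodic feature vectors are simply concatenated before classification; the feature names and the fusion strategy are assumptions for illustration, not the team’s stated design.

```python
# Hypothetical sketch of text-plus-prosody fusion. Feature names and
# concatenation-based fusion are illustrative assumptions.
import numpy as np

def fuse_features(text_vec, prosody_vec):
    """Early fusion: concatenate per-session text and prosody features
    into one vector for a downstream classifier."""
    return np.concatenate([text_vec, prosody_vec])

text_vec = np.random.rand(128)  # e.g., a session-level text embedding
prosody_vec = np.array([180.0, 35.0, 0.62])  # e.g., mean pitch (Hz), pitch std, voiced ratio
print(fuse_features(text_vec, prosody_vec).shape)  # (131,)
```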

Flemotomos spoke about the personal appeal of the work: “Directly helping humans through technology, instead of solely dealing with the technical aspects of an algorithm, is really rewarding.”

In addition to Flemotomos, Victor Martinez and Zhuohao Chen, PhD students at the University of Southern California Signal Analysis and Interpretation Laboratory, contributed to the development of the AI tools and software, all under the guidance of the study’s senior author Shrikanth Narayanan, USC University Professor and Nikias Chair in Engineering, in collaboration with University of Pennsylvania Assistant Professor Torrey Creed and University of Washington Research Professor David Atkins.

###
