Video: A new tool, developed at Johns Hopkins University and trained on videos of expert surgeons at work, offers medical students real-time personalized advice as they practice suturing.
Credit: Johns Hopkins University
Amid an increasingly acute surgeon shortage, artificial intelligence could help fill the gap by coaching medical students as they practice surgical techniques.
A new tool, trained on videos of expert surgeons at work, offers students real-time personalized advice as they practice suturing. Initial trials suggest AI can be a powerful substitute teacher for more experienced students.
“We’re at a pivotal time. The provider shortage is ever increasing and we need to find new ways to provide more and better opportunities for practice. Right now, an attending surgeon who already is short on time needs to come in and watch students practice, and rate them, and give them detailed feedback—that just doesn’t scale,” said senior author Mathias Unberath, an expert in AI-assisted medicine who focuses on how people interact with AI. “The next best thing might be our explainable AI that shows students how their work deviates from expert surgeons.”
Developed at Johns Hopkins University, the pioneering technology was showcased and honored at the recent International Conference on Medical Image Computing and Computer Assisted Intervention.
Currently, many medical students watch videos of experts performing surgery and try to imitate what they see. Existing AI models can rate students, but according to Unberath they fall short because they don’t tell students what they’re doing right or wrong.
“These models can tell you if you have high or low skill, but they struggle with telling you why,” he said. “If we want to enable meaningful self-training, we need to help learners understand what they need to focus on and why.”
The team’s model incorporates what’s known as “explainable AI,” an approach that, in this case, rates how well a student closes a wound and also tells them precisely how to improve.
The team trained their model by tracking the hand movements of expert surgeons as they closed incisions. When students try the same task, the AI gives them immediate feedback on how they compared to an expert and how to refine their technique.
“Learners want someone to tell them objectively how they did,” said first author Catalina Gomez, a Johns Hopkins PhD student in computer science. “We can calculate their performance before and after the intervention and see if they are moving closer to expert practice.”
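The article doesn’t detail how the system scores a learner against the expert reference, but the general idea of comparing a student’s tracked hand motion to an expert’s and turning the deviation into feedback can be sketched simply. The snippet below is an illustrative sketch only: the function names (trajectory_distance, feedback), the resampled Euclidean distance, and the threshold are assumptions, not the team’s actual model.

```python
import numpy as np

def trajectory_distance(student, expert):
    """Mean Euclidean distance between time-aligned hand positions.

    Both inputs are (T, 3) arrays of x, y, z coordinates; the student's
    trajectory is resampled onto the expert's timeline before comparing.
    """
    t_student = np.linspace(0, 1, len(student))
    t_expert = np.linspace(0, 1, len(expert))
    resampled = np.stack(
        [np.interp(t_expert, t_student, student[:, d]) for d in range(student.shape[1])],
        axis=1,
    )
    return float(np.mean(np.linalg.norm(resampled - expert, axis=1)))

def feedback(student, expert, threshold=0.5):
    """Turn the deviation score into a short, human-readable coaching message."""
    score = trajectory_distance(student, expert)
    if score < threshold:
        return f"Deviation {score:.2f}: close to the expert motion. Keep this technique."
    return f"Deviation {score:.2f}: your hand path strays from the expert's. Slow down and shorten each pass."

# Toy usage: synthetic trajectories standing in for tracked hand movements.
rng = np.random.default_rng(0)
expert = np.cumsum(rng.normal(size=(100, 3)) * 0.1, axis=0)
student = expert[::2] + rng.normal(scale=0.3, size=(50, 3))
print(feedback(student, expert))
```

Measuring the same score before and after practice, as Gomez describes, would show whether a learner is moving closer to expert practice.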
The team performed a first-of-its-kind study to see whether students learned better from the AI or from watching videos. They randomly assigned 12 medical students with suturing experience to train with one of the two methods.
All participants practiced closing an incision with stitches. Some got immediate AI feedback while others tried to compare what they did to a surgeon in a video. Then everyone tried suturing again.
Compared with students who watched videos, some of the students coached by AI, particularly those with more experience, learned much faster.
“In some individuals the AI feedback has a big effect,” Unberath said. “Beginner students still struggled with the task, but for students with a solid foundation in surgery, who are at the point where they can incorporate the advice, it had a great impact.”
Next the team plans to refine the model to make it easier to use. They hope to eventually create a version that students could use at home.
“We’d like to offer computer vision and AI technology that allows someone to practice in the comfort of their home with a suturing kit and a smartphone,” Unberath said. “This will help us scale up training in the medical fields. It’s really about how we can use this technology to solve problems.”
Authors include Lalithkumar Seenivasan, Xinrui Zou, Jeewoo Yoon, Sirui Chu, Ariel Leon, Patrick Kramer, Yu-Chun Ku, Jose L. Porras, and Masaru Ishii, all of Johns Hopkins, and Alejandro Martin-Gomez of the University of Arkansas.