Image: Are you eligible for a clinical trial? ChatGPT can find out. Credit: IOP Publishing
A new study in the journal Machine Learning: Health finds that ChatGPT can accelerate patient screening for clinical trials, showing promise for reducing delays and improving trial success rates.
Researchers at UT Southwestern Medical Center used ChatGPT to assess whether patients were eligible to take part in clinical trials, and were able to identify suitable candidates within minutes.
Clinical trials, which test new medications and procedures in human volunteers, are vital for developing and validating new treatments. But many trials struggle to enrol enough participants. According to a recent study, up to 20% of National Cancer Institute (NCI)-affiliated trials fail due to low enrolment. This not only inflates costs and delays results, but also undermines the statistical reliability of trial findings.
Currently, screening patients for trials is a manual process. Researchers must review each patient’s medical records to determine if they meet eligibility criteria, which takes around 40 minutes per patient. With limited staff and resources, this process is often too slow to keep up with demand.
Part of the problem is that valuable patient information contained in electronic health records (EHRs) is often buried in unstructured text, such as doctors’ notes, which traditional machine learning software struggles to decipher. As a result, many eligible patients are overlooked because there simply isn’t enough capacity to review every case. This contributes to low enrolment rates, trial delays and even cancellations, ultimately slowing down access to new therapies.
To address this problem, the researchers explored whether ChatGPT could speed up the screening process, using GPT-3.5 and GPT-4 to analyse the records of 74 patients and determine whether they qualified for a head and neck cancer trial.
Three ways of prompting the AI were tested (sketched in the example after this list):
- Structured Output (SO): asking for answers in a set format.
- Chain of Thought (CoT): asking the model to explain its reasoning.
- Self-Discover (SD): letting the model figure out what to look for.
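To make these strategies concrete, here is a minimal sketch of how each prompting style might be issued through the OpenAI Python client. The eligibility criteria, patient note, and prompt wording are hypothetical placeholders for illustration only, not the prompts used in the study.

```python
# Illustrative sketch of the three prompting strategies (SO, CoT, SD).
# Criteria, patient note, and prompt text are hypothetical, not the
# study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERIA = "Age >= 18; histologically confirmed head and neck cancer; no prior radiotherapy."
PATIENT_NOTE = "62-year-old male, biopsy-proven oropharyngeal carcinoma, no prior radiation."

PROMPTS = {
    # Structured Output: constrain the answer to a fixed format.
    "SO": 'Answer only as JSON: {"eligible": true or false, "reasons": [...]}.',
    # Chain of Thought: ask the model to reason step by step.
    "CoT": "Check each criterion in turn, explain your reasoning, then give a final verdict.",
    # Self-Discover: let the model devise its own approach to the task.
    "SD": "Devise your own strategy for judging eligibility, then apply it and report a verdict.",
}

def screen(strategy: str) -> str:
    """Ask GPT-4 whether the patient meets the trial criteria."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You screen patients for clinical trial eligibility."},
            {"role": "user", "content": f"Criteria: {CRITERIA}\n"
                                        f"Patient note: {PATIENT_NOTE}\n"
                                        f"{PROMPTS[strategy]}"},
        ],
    )
    return response.choices[0].message.content

for name in PROMPTS:
    print(name, "->", screen(name))
```

In practice, the structured-output style makes answers easy to parse automatically, while the reasoning-based styles trade some speed and cost for an auditable justification a human reviewer can check.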
The results were promising. GPT-4 was more accurate than GPT-3.5, though slightly slower and more expensive. Screening times ranged from 1.4 to 12.4 minutes per patient, with costs between $0.02 and $0.27.
“LLMs like GPT-4 can help screen patients for clinical trials, especially when using flexible criteria,” said Dr. Mike Dohopolski, lead author of the study. “They’re not perfect, especially when all rules must be met, but they can save time and support human reviewers.”
This research highlights the potential for AI to support faster, more efficient clinical trials, bringing new treatments to patients sooner.
The study is one of the first articles published in IOP Publishing's Machine Learning series™, the world’s first open access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.
The same research team have also developed a method that allows clinicians to adjust patients’ radiation therapy in real time whilst they are still on the treatment table. Using a deep learning system called GeoDL, the AI delivers precise 3D dose estimates from CT scans and treatment data in just 35 milliseconds. This could make adaptive radiotherapy faster and more efficient in real clinical settings.
ENDS
About IOP Publishing
IOP Publishing is a society-owned scientific publisher, delivering impact, recognition and value to the scientific community. Its purpose is to expand the world of physics, offering a portfolio of journals, ebooks, conference proceedings and science news resources globally. IOPP is a member of Purpose-Led Publishing, a coalition of society publishers who pledge to put purpose above profit.
As a wholly owned subsidiary of the Institute of Physics, a not-for-profit society, IOP Publishing supports the Institute’s work to inspire people to develop their knowledge, understanding and enjoyment of physics. Visit ioppublishing.org to learn more.
About the lead author
Dr. Dohopolski is a physician-scientist in radiation oncology at UT Southwestern with experience integrating artificial intelligence (AI) into clinical practice, notably developing advanced predictive models and pioneering the use of large language models (LLMs) for clinical trial screening. He received an engineering degree from Notre Dame and a medical degree from the University of Pittsburgh School of Medicine. During residency at UT Southwestern Medical Center, he completed the competitive Holman Pathway, dedicating 21 months exclusively to AI-focused research under mentors Drs. Jing Wang and Steve Jiang from the Medical Artificial Intelligence and Automation (MAIA) lab. This work has now been implemented prospectively in several studies across the university.
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
ChatGPT Augmented Clinical Trial Screening
Article Publication Date
31-Jul-2025
COI Statement
The authors state that they do not have any conflicts of interest.