News Release

Federated machine learning enables the largest brain tumor study to date, without sharing patient data

The largest and most diverse glioblastoma patient study ever conducted focuses on tumor boundary detection without actually sharing private patient data, according to Penn Medicine researchers

Peer-Reviewed Publication

University of Pennsylvania School of Medicine

PHILADELPHIA— Researchers at Penn Medicine and Intel Corporation led the largest-to-date global machine learning effort to securely aggregate knowledge from brain scans of 6,314 glioblastoma (GBM) patients at 71 sites around the globe and develop a model that can enhance identification and prediction of boundaries in three tumor sub-compartments, without compromising patient privacy. Their findings were published today in Nature Communications.


“This is the single largest and most diverse dataset of glioblastoma patients ever considered in the literature, and was made possible through federated learning,” said senior author Spyridon Bakas, PhD, an assistant professor of Pathology & Laboratory Medicine, and Radiology, at the Perelman School of Medicine at the University of Pennsylvania. “The more data we can feed into machine learning models, the more accurate they become, which in turn can improve our ability to understand, treat, and remove glioblastoma in patients with more precision.”

Researchers studying rare conditions like GBM, an aggressive type of brain tumor, often have patient populations limited to their own institution or geographic location. Because of privacy-protection legislation, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in Europe, establishing data-sharing collaborations across institutions without compromising patient privacy is a major obstacle for many healthcare providers.

A newer machine learning approach, called federated learning, offers a solution to these hurdles by bringing the machine learning algorithm to the data, instead of following the current paradigm of centralizing the data where the algorithm runs. Federated learning — an approach first implemented by Google for keyboards’ autocorrect functionality — trains a machine learning algorithm across multiple decentralized devices or servers (in this case, institutions) holding local data samples, without actually exchanging them. It has previously been shown to allow clinicians at institutions in different countries to collaborate on research without sharing any private patient data.
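The core mechanism can be illustrated with a minimal sketch of federated averaging, the standard aggregation scheme behind federated learning: each site improves a copy of the shared model on its own data and transmits only updated parameters, never the data itself. The sites, data, and linear model below are purely hypothetical illustrations, not the study's actual segmentation model.

```python
# Minimal federated-averaging sketch: sites share model parameters, not data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's private training pass: gradient steps for a linear model y ~ X @ w."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, site_datasets):
    """Aggregate per-site updates into a consensus model, weighting each
    site by its number of cases (as federated averaging does)."""
    updates, sizes = [], []
    for X, y in site_datasets:
        updates.append(local_update(global_w, X, y))  # only parameters leave the site
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Hypothetical example: three "sites," each holding private local data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):  # 20 communication rounds between sites and aggregator
    w = federated_round(w, sites)
```

After the rounds complete, the consensus parameters `w` approximate what training on the pooled data would have produced, even though no site's raw cases were ever exchanged.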

Bakas led this massive collaborative study along with first authors Sarthak Pati, MS, a senior software developer at Penn’s Center for Biomedical Image Computing & Analytics (CBICA), Ujjwal Baid, PhD, a postdoctoral researcher at CBICA, Brandon Edwards, PhD, a research scientist at Intel Labs, and Micah Sheller, a research scientist at Intel Labs.

“Data helps to drive discovery, especially in rare cancers where available data can be scarce. The federated approach we outline allows for access to maximal data while lowering institutional burdens to data sharing,” said Jill Barnholtz-Sloan, PhD, adjunct professor at Case Western Reserve University School of Medicine.

The model followed a staged approach. The first stage, called the public initial model, was pre-trained using publicly available data from the International Brain Tumor Segmentation (BraTS) challenge. The model was tasked with identifying the boundaries of three GBM tumor sub-compartments: the “enhancing tumor” (ET), representing the vascular blood-brain barrier breakdown within the tumor; the “tumor core” (TC), which includes the ET and the dead (necrotic) tissue, and represents the part of the tumor relevant to surgeons who remove it; and the “whole tumor” (WT), defined as the union of the TC and the infiltrated surrounding tissue, which is the whole area that would be treated with radiation.

This first stage used data from 231 patient cases from 16 sites, and the resulting model was validated against the local data at each site. The second stage, called the preliminary consensus model, used the public initial model and incorporated more data from 2,471 patient cases from 35 sites, which improved its accuracy. The final stage, or final consensus model, used the updated model and incorporated the largest amount of data, from 6,314 patient cases (3,914,680 images) at 71 sites across six continents, to further optimize and test for generalizability to unseen data.

As a control for each step, researchers excluded 20 percent of the total cases contributed by each participating site from the model training process and used them as “local validation data.” This allowed them to gauge the accuracy of the collaborative method. To further evaluate the generalizability of the models, six sites were not involved in any of the training stages, representing a completely unseen “out-of-sample” population of 590 cases. Notably, the site at the American College of Radiology validated the model using data from a national clinical trial.
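The hold-out procedure described above is straightforward to sketch: each site shuffles its own case identifiers and reserves 20 percent for local validation before any training begins. The case IDs and function below are hypothetical illustrations of the idea, not the study's actual pipeline.

```python
# Sketch of a per-site hold-out: reserve a fraction of each site's own
# cases as local validation data, kept out of model training entirely.
import random

def split_local_cases(case_ids, holdout_frac=0.2, seed=42):
    """Shuffle a site's case IDs and reserve a fraction for local validation.
    Returns (training cases, local validation cases)."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # fixed seed so the split is reproducible
    n_val = max(1, round(holdout_frac * len(ids)))
    return ids[n_val:], ids[:n_val]

# Hypothetical site with 100 local cases: 80 go to training, 20 to validation.
train_ids, val_ids = split_local_cases([f"case_{i:03d}" for i in range(100)])
```

Because every site validates against cases the shared model never saw during training, the measured improvements reflect genuine generalization rather than memorization of the training data.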

Following model training, the final consensus model showed significant performance improvements against the collaborators’ local validation data: a 27 percent improvement in detecting ET boundaries, 33 percent in detecting TC boundaries, and 16 percent in WT boundary detection. The improved results are a clear indication of the benefit afforded by access to more cases, not only to improve the model but also to validate it.

Looking ahead, the authors hope that due to the generic methodology of federated learning, its applications in medical research can be far-reaching, applying not only to other cancers, but other conditions, like neurodegeneration, and beyond. They also anticipate more research to demonstrate that federated learning can abide by security and privacy protocols around the world.

Funding for this research was provided by the National Institutes of Health (U01CA242871, R01NS042645, U24CA189523, U24CA215109, U01CA248226, P30CA51008, R50CA211270, UL1TR001433, R21EB030209, R37CA214955, R01CA233888, U10CA21661, U10CA37422, U10CA180820, U10CA180794, U01CA176110, R01CA082500, CA079778, CA080098, CA180794, CA180820, CA180822, CA180868), and the National Science Foundation (2040532, 2040462).

Intel Corporation provided software engineering staff and privacy-protection expertise to the project during development of the software used.


Penn Medicine is one of the world’s leading academic medical centers, dedicated to the related missions of medical education, biomedical research, and excellence in patient care. Penn Medicine consists of the Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania (founded in 1765 as the nation’s first medical school) and the University of Pennsylvania Health System, which together form a $9.9 billion enterprise.

The Perelman School of Medicine has been ranked among the top medical schools in the United States for more than 20 years, according to U.S. News & World Report's survey of research-oriented medical schools. The School is consistently among the nation's top recipients of funding from the National Institutes of Health, with $546 million awarded in the 2021 fiscal year.

The University of Pennsylvania Health System’s patient care facilities include: the Hospital of the University of Pennsylvania and Penn Presbyterian Medical Center, which are recognized as one of the nation’s top “Honor Roll” hospitals by U.S. News & World Report; Chester County Hospital; Lancaster General Health; Penn Medicine Princeton Health; and Pennsylvania Hospital, the nation’s first hospital, founded in 1751. Additional facilities and enterprises include Good Shepherd Penn Partners, Penn Medicine at Home, Lancaster Behavioral Health Hospital, and Princeton House Behavioral Health, among others.

Penn Medicine is powered by a talented and dedicated workforce of more than 52,000 people. The organization also has alliances with top community health systems across both Southeastern Pennsylvania and Southern New Jersey, creating more options for patients no matter where they live.

Penn Medicine is committed to improving lives and health through a variety of community-based programs and activities. In fiscal year 2021, Penn Medicine provided more than $619 million to benefit our community.
