News Release

New AI tool makes medical image segmentation 90% more efficient

Rice approach sets standard for brain and other medical imaging

Peer-Reviewed Publication

Rice University

Image: Kushal Vyas is an electrical and computer engineering doctoral student at Rice University and first author on a paper presented at the Medical Image Computing and Computer Assisted Intervention Society, or MICCAI. (Photo by Jeff Fitlow/Rice University)

HOUSTON – (Oct. 14, 2025) – When doctors analyze a medical scan of an organ or area in the body, each part of the image has to be assigned an anatomical label. If the brain is under scrutiny, for instance, its different parts have to be labeled as such, pixel by pixel: cerebral cortex, brain stem, cerebellum, etc. The process, called medical image segmentation, guides diagnosis, surgery planning and research.
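In software terms, a segmentation is simply a label map: an array the same shape as the scan in which each entry names the anatomical class of the matching pixel. A minimal illustrative sketch in Python (the toy values and region codes below are hypothetical, not taken from the study):

    import numpy as np

    # Toy 4x4 label map: each entry classifies the matching pixel of a scan.
    LABEL_NAMES = {0: "background", 1: "cerebral cortex",
                   2: "brain stem", 3: "cerebellum"}
    labels = np.array([
        [0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 2, 2, 0],
        [0, 3, 3, 0],
    ])

    print(LABEL_NAMES[labels[2, 1]])  # pixel at row 2, column 1: "brain stem"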

In the days before artificial intelligence (AI) and machine learning (ML), clinicians performed this crucial yet painstaking and time-consuming task by hand, but over the past decade, U-Nets, a type of AI architecture designed specifically for medical image segmentation, have been the go-to instead. However, U-Nets require large amounts of data and computing resources to train.

“For large and/or 3D images, these demands are costly,” said Kushal Vyas, a Rice electrical and computer engineering doctoral student and first author on a paper presented at the Medical Image Computing and Computer Assisted Intervention Society, or MICCAI, the leading conference in the field. “In this study, we proposed MetaSeg, a completely new way of performing image segmentation.”

In experiments using 2D and 3D brain magnetic resonance imaging (MRI) data, MetaSeg was shown to achieve the same segmentation performance as U-Nets while needing 90% fewer parameters, the key variables AI/ML models derive from training data and use to identify patterns and make predictions.

The study, titled “Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation,” won the best paper award at MICCAI, selected from a pool of more than 1,000 accepted submissions.

“Instead of U-Nets, MetaSeg leverages implicit neural representations, a neural network framework that has hitherto not been thought useful or explored for image segmentation,” Vyas said.

An implicit neural representation (INR) is an AI network that encodes a medical image as a mathematical function: given the coordinates of any pixel in a 2D image, or any voxel in a 3D one, it returns that point’s signal value (color, brightness, etc.).
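As a rough sketch of that idea, an INR can be as simple as a small neural network trained until feeding in a pixel’s coordinates returns that pixel’s intensity; the image is then effectively stored in the network’s weights. The PyTorch example below is purely illustrative (the network size, activations and training schedule are assumptions, not the architecture from the study):

    import torch
    import torch.nn as nn

    class TinyINR(nn.Module):
        """Maps (x, y) pixel coordinates to a grayscale intensity."""
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),  # predicted signal value
            )

        def forward(self, coords):
            return self.net(coords)

    def fit_inr(img, steps=500):
        """Fit one INR to one image tensor of shape (H, W)."""
        h, w = img.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
        target = img.reshape(-1, 1)
        model = TinyINR()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((model(coords) - target) ** 2).mean()
            loss.backward()
            opt.step()
        return model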

While INRs offer a very detailed yet compact way to represent information, they are also highly specific, meaning they typically only work well for the single signal or image they were trained on: An INR fitted to one brain MRI does not learn general rules about what different parts of the brain look like, so if given an image of a different brain, it would falter.

“INRs have been used in the computer vision and medical imaging communities for tasks such as 3D scene reconstruction and signal compression, which only require modeling one signal at a time,” Vyas said. “However, it was not obvious before MetaSeg how to use them for tasks such as segmentation, which require learning patterns over many signals.”

To make INRs useful for medical image segmentation, the researchers taught them to predict both the signal values and the segmentation labels for a given image. To do so, they used meta-learning, an AI training strategy, often described as “learning to learn,” that helps models rapidly adapt to new information.
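As a loose illustration of how such meta-learning might look in code, the sketch below pairs one network trunk with two output heads, one for pixel values and one for labels, and uses a first-order meta-update in the spirit of FOMAML. Everything here, from the TwoHeadINR architecture to the hyperparameters, is a hypothetical stand-in rather than the MetaSeg implementation:

    import copy
    import torch
    import torch.nn.functional as F

    class TwoHeadINR(torch.nn.Module):
        """One trunk, two heads: reconstruct pixels and predict labels."""
        def __init__(self, hidden=64, n_classes=4):
            super().__init__()
            self.trunk = torch.nn.Sequential(
                torch.nn.Linear(2, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, hidden), torch.nn.ReLU())
            self.recon = torch.nn.Linear(hidden, 1)        # signal value
            self.seg = torch.nn.Linear(hidden, n_classes)  # label logits

        def forward(self, coords):
            h = self.trunk(coords)
            return self.recon(h), self.seg(h)

    def meta_train_step(meta_model, coords, image, labels,
                        inner_steps=3, inner_lr=1e-2, meta_lr=1e-3):
        """One outer-loop step on a single training image."""
        model = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            # Inner loop: fit this image's pixels, just as at test time.
            recon, _ = model(coords)
            loss = F.mse_loss(recon, image)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Outer objective: after pixel fitting, labels should decode well.
        model.zero_grad()
        _, logits = model(coords)
        F.cross_entropy(logits, labels).backward()
        # First-order meta-update: nudge the shared initialization using the
        # segmentation gradient taken at the adapted weights.
        with torch.no_grad():
            for p_meta, p in zip(meta_model.parameters(), model.parameters()):
                p_meta -= meta_lr * p.grad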

“We prime the INR model parameters in such a way that they are further optimized on an unseen image at test time, which enables the model to decode the image features into accurate labels,” Vyas said.

This special training allows the INRs not only to quickly adjust themselves to match the pixels or voxels of a previously unseen medical image but also to then decode its labels, instantly predicting where the outlines of different anatomical regions should go.
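Continuing the same hypothetical sketch (reusing TwoHeadINR and the imports above), the test-time recipe of fitting pixels and then reading off labels might look like:

    def segment_unseen(meta_model, coords, new_image, steps=10, lr=1e-2):
        """Adapt the meta-learned weights to a new image, then read labels."""
        model = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            # Fit pixels: only the reconstruction loss is used; no
            # ground-truth labels for the unseen image are needed.
            recon, _ = model(coords)
            loss = F.mse_loss(recon, new_image)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Get labels: the adapted network's segmentation head now decodes
        # a per-pixel class prediction.
        with torch.no_grad():
            _, logits = model(coords)
        return logits.argmax(dim=-1)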

“MetaSeg offers a fresh, scalable perspective to the field of medical image segmentation that has been dominated for a decade by U-Nets,” said Guha Balakrishnan, assistant professor of electrical and computer engineering at Rice and a member of the university’s Ken Kennedy Institute. “Our research results promise to make medical image segmentation far more cost-effective while delivering top performance.”

Balakrishnan, the corresponding author on the study, is part of a thriving ecosystem of Rice researchers at the forefront of digital health innovation, which includes the Digital Health Initiative and the joint Rice-Houston Methodist Digital Health Institute. Ashok Veeraraghavan, chair of the Department of Electrical and Computer Engineering and professor of electrical and computer engineering and computer science at Rice, is also an author on the study.

While MetaSeg can be applied to a range of imaging contexts, its demonstrated potential to enhance brain imaging illustrates the kind of research Proposition 14, on the ballot in Texas Nov. 4, could help expand statewide.

The research was supported by the U.S. National Institutes of Health (R01DE032051), the Advanced Research Projects Agency for Health (D24AC00296) and the National Science Foundation (2107313, 1648449). The content herein is solely the responsibility of the authors and does not necessarily represent the official views of the funding organizations and institutions.


-30-

This news release can be found online at news.rice.edu.

Follow Rice News and Media Relations via Twitter @RiceUNews.

Peer-reviewed paper:

Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation | The Medical Image Computing and Computer Assisted Intervention Society - MICCAI 2025 | DOI: 10.1007/978-3-032-04947-6_19

Authors: Kushal Vyas, Ashok Veeraraghavan, Guha Balakrishnan

https://doi.org/10.1007/978-3-032-04947-6_19

Access associated media files:

https://rice.box.com/s/po3ew9sf4mpgxfhdh2i2k0t7wd0vp2ke
(Photos by Jeff Fitlow/Rice University)


About Rice:

Located on a 300-acre forested campus in Houston, Texas, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of architecture, business, continuing studies, engineering and computing, humanities, music, natural sciences and social sciences and is home to the Baker Institute for Public Policy. Internationally, the university maintains the Rice Global Paris Center, a hub for innovative collaboration, research and inspired teaching located in the heart of Paris. With 4,776 undergraduates and 4,104 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 7 for best-run colleges by the Princeton Review. Rice is also rated as a best value among private universities by the Wall Street Journal and is included on Forbes’ exclusive list of “New Ivies.”

