Teaching models to cope with messy medical data
Peer-Reviewed Publication
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Last Updated: 9-May-2026 14:16 ET (9-May-2026 18:16 GMT/UTC)
When labelled scans are scarce and hospitals collect images in different ways, a new training recipe developed by SUTD researchers helps segmentation AI keep its bearings across domains without needing more annotations.
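The blurb doesn't spell out the recipe, but a common way to train segmentation models that hold up across scanners without extra annotations is consistency regularization: penalize the model whenever its predictions change under label-preserving appearance shifts that mimic cross-hospital acquisition differences. A minimal NumPy sketch of that general idea (the toy segmenter, augmentation ranges, and sizes are illustrative assumptions, not the SUTD method):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_intensity(img, rng):
    """Simulate a scanner/protocol shift: random gamma and brightness.

    These appearance changes stand in for cross-hospital acquisition
    differences; the underlying anatomy (and thus the label) is unchanged.
    """
    gamma = rng.uniform(0.7, 1.4)
    shift = rng.uniform(-0.1, 0.1)
    return np.clip(img ** gamma + shift, 0.0, 1.0)

def toy_segmenter(img, threshold=0.5, temperature=0.1):
    """Illustrative stand-in model: a soft intensity threshold (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(img - threshold) / temperature))

def consistency_loss(pred_a, pred_b):
    """Mean squared disagreement between two predicted probability maps."""
    return float(np.mean((pred_a - pred_b) ** 2))

img = rng.random((32, 32))
pred_clean = toy_segmenter(img)
pred_aug = toy_segmenter(augment_intensity(img, rng))

# The key point: this penalty needs no labels, only agreement across views,
# so it can be minimized on unannotated scans from new hospitals.
loss = consistency_loss(pred_clean, pred_aug)
```

In a real training loop this unsupervised term would be added to the usual supervised loss on the scarce labelled scans.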
Kyoto, Japan -- Predicting earthquakes has long been an unattainable fantasy. Factors like odd animal behaviors that have historically been thought to forebode earthquakes are not supported by empirical evidence. Because these supposed signs often appear without an earthquake following, and earthquakes often strike without any such signs, seismologists have concluded that earthquakes occur with little or no warning. At least, that is how it appears on the surface.
Earthquake-generating zones lie deep within the Earth's crust and thus cannot be directly observed, but scientists have long proposed that faults may undergo a precursory phase before an earthquake, during which micro-fracturing and slow slip occur. Yet exactly how these processes could enable prediction of a main shock remains unclear. Furthermore, observational studies have suggested that small and large earthquakes appear indistinguishable at the onset of rupture, raising doubts about the usefulness of short-term precursors.
These difficulties have prompted interest in the use of machine learning to search for potentially predictive fault signals. Machine learning models have demonstrated an ability to predict stick-slip laboratory earthquakes in small, centimeter-scale experiments, but this approach has not yet been applied to larger, more complex systems that more closely mimic natural faults.
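To illustrate how such models work in the laboratory setting, one widely used formulation treats prediction as regression of time-to-failure from statistics of the signal a fault emits. A minimal sketch on synthetic data (the signal model, window size, and plain least-squares fit are illustrative assumptions, not this study's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "lab earthquake" cycles: noise whose variance grows as the
# fault approaches failure, then resets after each slip event.
# (A hypothetical stand-in for real acoustic-emission recordings.)
cycle_len, n_cycles = 200, 20
signal, ttf = [], []
for _ in range(n_cycles):
    for t in range(cycle_len):
        remaining = (cycle_len - t) / cycle_len      # 1.0 -> 0.0 each cycle
        signal.append(rng.normal(0.0, 1.0 + 3.0 * (1.0 - remaining)))
        ttf.append(remaining)
signal, ttf = np.array(signal), np.array(ttf)

# Feature: rolling variance of the signal over a short window.
win = 20
var = np.array([signal[max(0, i - win):i + 1].var()
                for i in range(len(signal))])

# Fit time-to-failure from the feature with ordinary least squares.
X = np.column_stack([var, np.ones_like(var)])
coef, *_ = np.linalg.lstsq(X, ttf, rcond=None)
pred = X @ coef

# The fit recovers the trend: higher variance means failure is closer.
corr = np.corrcoef(pred, ttf)[0, 1]
```

The laboratory results referenced above used richer features and models, but the core idea is the same: a continuously emitted signal carries statistical information about how close the fault is to slipping.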
MIT researchers developed a technique that enables LLMs to permanently absorb new knowledge: the model generates study sheets from incoming data and then trains on them to memorize the important information.
The Context-Guided Segmentation Network (CGS-Net), developed by University of Maine researchers, is a deep learning architecture designed to interpret microscopic images of tissue with greater precision than conventional AI models. Its dual-encoder design mirrors the workflow of a pathologist examining a slide: one branch processes a high-resolution image patch to capture cell-level detail, while the other examines a lower-resolution patch covering the surrounding tissue. A system of interconnected encoders and decoders then combines information from both resolutions into a complete analysis.
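The dual-resolution idea can be sketched in a few lines. This is a toy illustration, not the actual CGS-Net: the "encoders" are simple average-pooling summaries, and the patch sizes, pooling factors, and concatenation fusion are all assumed for the example.

```python
import numpy as np

def avg_pool(img, k):
    """Average-pool a 2-D image by factor k (toy 'encoder' stage)."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def dual_encoder_features(slide, cy, cx, local=16, context=64):
    """Two branches, mirroring the pathologist's workflow:
    a high-resolution local patch for cell-level detail, plus a wider
    patch, downsampled to lower resolution, for surrounding-tissue context.
    """
    lo, co = local // 2, context // 2
    local_patch = slide[cy - lo:cy + lo, cx - lo:cx + lo]      # fine detail
    context_patch = slide[cy - co:cy + co, cx - co:cx + co]    # wide view
    context_patch = avg_pool(context_patch, context // local)  # lower res
    # "Encode" each branch, then fuse the two views by concatenation.
    f_local = avg_pool(local_patch, 4).ravel()
    f_context = avg_pool(context_patch, 4).ravel()
    return np.concatenate([f_local, f_context])

rng = np.random.default_rng(2)
slide = rng.random((256, 256))     # stand-in for a whole-slide image
feats = dual_encoder_features(slide, 128, 128)
```

In the real network both branches are learned convolutional encoders and the fused features feed a decoder that produces a segmentation map rather than a flat feature vector.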
A new heart monitoring system combining 3D printing and artificial intelligence could transform the way doctors measure and diagnose patients' heart health.
Developed at SFU’s School of Mechatronic Systems Engineering, the system features reusable dry 3D-printed electrodes embedded in a soft chest belt; the folded, origami-inspired design uses gentle suction to stick to the skin.
Carbon-based ink printed on the suction cup replaces electrolyte gel, conducting the heart’s electrical signals to a wearable device whose built-in AI software can pre-diagnose up to 10 types of arrhythmias, or irregular heart rhythms.