Image: The idea behind the breakthrough. Credit: Netta Kasher
BEER-SHEVA, Israel, January 18, 2026 – To ensure our bodies function correctly, the cells that compose them must operate properly. Imagine a cell as a bustling city where tiny parts called organelles move, reorganize, and respond to external stresses. To understand how our bodies stay healthy, or what goes wrong during disease, scientists need a way to "peek" inside the cell and observe this movement in real time.
The primary tool for this is the light microscope, a technology dating back to the 17th century that allows us to turn the invisible cell into a world of information. However, because cells are largely transparent, standard microscopy is insufficient for understanding their internal workings.
The limitations of fluorescent staining
To solve the transparency problem, scientists use fluorescent staining, a "glowing marker" that labels specific organelles within the cell. The solution was borrowed from nature: a fluorescent protein derived from a glowing jellyfish. The breakthrough was so significant that it earned its discoverers the 2008 Nobel Prize in Chemistry, as it allowed us to "turn on flashlights" inside a living cell and see where each component is located.
Despite its success, this method carries a heavy price. It is difficult to stain many organelles simultaneously because the colors "mix" together. The intense light required to see the stains causes them to fade rapidly and can harm the cell, disrupting its behavior or even killing it. Attaching these dyes to organelles is like "adding small weights to a delicate machine," which may prevent the cell from functioning naturally.
The AI Revolution: "In Silico Labeling"
A partial solution emerged in the mid-20th century, when the physicist Frits Zernike developed phase-contrast microscopy, a Nobel Prize-winning method that exploits how light rays bend and slow down as they pass through different parts of a cell. It is like looking at transparent glass in water: if light hits it at the right angle, its outlines become visible. Such label-free imaging methods (imaging without external dyes) create artificial contrast, making some parts of the cell darker and others lighter.
In recent years, a revolution called "in silico labeling" (also known as "virtual staining") has emerged. Researchers have shown that artificial intelligence can be taught to "translate" label-free light microscope images into detailed, colorful fluorescent images. The AI learns to identify subtle patterns of light and shadow in the transparent image and to recognize which organelles produce those patterns, predicting what the cell would look like if it were stained. The result is a detailed, "colorized" movie of cell life, while the cell stays healthy and behaves exactly as nature intended.
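At its core, this translation can be framed as image-to-image regression: a neural network takes a label-free image as input and is trained, on paired examples, to output the matching fluorescence image. The sketch below is a deliberately minimal illustration of that framing only; the tiny network, the random placeholder data, and all names are assumptions for demonstration, not the architecture used in the published work.

```python
# Minimal sketch of in silico labeling as image-to-image regression.
# Everything here (network size, shapes, data) is a simplified placeholder.
import torch
import torch.nn as nn

class TinyVirtualStainer(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the label-free (transmitted-light) image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: expand back to a same-sized fluorescence prediction.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyVirtualStainer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder paired data; in practice these would be co-registered
# label-free and fluorescence channels from the same microscope field.
label_free = torch.rand(8, 1, 64, 64)
fluorescence = torch.rand(8, 1, 64, 64)

for step in range(100):
    prediction = model(label_free)
    # Pixel-wise regression loss between prediction and the real stain.
    loss = nn.functional.mse_loss(prediction, fluorescence)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained on real paired images, such a model would take only the harmless label-free image and output a predicted stain, which is what lets the cell remain unperturbed.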
The "Achilles' Heel" and the solution by BGU researchers
However, AI has an "Achilles' heel": it often tries to decipher the cell like a person trying to understand a single word without reading the entire sentence. For example, seeing the word "Apple" makes it hard to know if the reference is to a fruit or a tech company. But through context, such as "I took a bite of..." versus "I bought shares of...", the meaning becomes clear. Previous in silico labeling methods looked at individual pixels without understanding the "story" behind the cell.
Nitzan Almalem and Prof. Assaf Zaritsky from the Computational Cell Dynamics lab at the Institute for Interdisciplinary Computational Science of the Stein Faculty of Computer and Information Science at Ben-Gurion University of the Negev developed a computational solution to this problem. Instead of having the computer learn from the image of the cell alone, Almalem taught it to use the context of the cell. Contextual information refers to metadata or environmental factors, such as the cell's shape, its neighbors, or its position in a colony, that help the AI "generalize" and understand the image better. By understanding this context, the computer could accurately stain rare events such as cell division, which look very different from the norm and often cause other methods to fail.
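To make the idea of context conditioning concrete, here is a hedged sketch of one simple way a per-cell context vector could be fused into a virtual-staining network. The fusion scheme and the feature names (roundness, neighbor count, a division flag) are illustrative assumptions, not the mechanism described in the paper.

```python
# Hypothetical sketch: feeding per-cell context into a staining network.
# Feature names and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class ContextAwareStainer(nn.Module):
    def __init__(self, context_dim=3):
        super().__init__()
        # Extract image features from the label-free input.
        self.image_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        )
        # Project the context vector to the same channel width so it
        # can be broadcast across every pixel of the feature map.
        self.context_net = nn.Linear(context_dim, 16)
        self.head = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, image, context):
        feats = self.image_net(image)    # (B, 16, H, W)
        ctx = self.context_net(context)  # (B, 16)
        ctx = ctx[:, :, None, None]      # (B, 16, 1, 1), broadcastable
        return self.head(feats + ctx)    # context-modulated prediction

model = ContextAwareStainer()
image = torch.rand(4, 1, 64, 64)
# Hypothetical per-cell context: [roundness, neighbor count, dividing?]
context = torch.tensor([[0.9, 3.0, 1.0],
                        [0.4, 7.0, 0.0],
                        [0.6, 5.0, 0.0],
                        [0.8, 2.0, 1.0]])
prediction = model(image, context)       # shape (4, 1, 64, 64)
```

The intuition mirrors the "Apple" example above: the same local pixel pattern can be resolved differently once the network also knows, for instance, that the cell is rounding up to divide.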
The researchers’ findings were recently published in the prestigious journal Nature Methods.
Their research was supported by the Israel Science Foundation (Grant no. 2516/21), by BGU’s Data Science Research Center, and by an Allen Distinguished Investigator Award, a Paul G. Allen Frontiers Group advised grant of Allen Family Philanthropies.
The Future: A Language Model for Cells
This ability to decipher context is just the beginning. The lab plans to expand this "context" to include information such as cell type, microscope type, disease state, and even the drugs the cell has received.
The vision is to build an extensive "dictionary" that eventually becomes a complete foundation model (a "language model") of the cellular world. Such a system could understand complex biological "texts" from any microscope and any cell type, providing scientists with a vivid, accurate picture of life without ever disturbing it.
Journal
Nature Methods
Method of Research
Computational simulation/modeling
Subject of Research
Cells
Article Title
Cell context-dependent in silico organelle localization in label-free microscopy images
Article Publication Date
19-Dec-2025