FAU Engineering secures NIH grant to explore how the brain learns to ‘see’
Grant and Award Announcement
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Last Updated: 31-Dec-2025 20:11 ET (1-Jan-2026 01:11 GMT/UTC)
More than 12 million Americans experience visual impairments that limit independence and quality of life. This National Eye Institute grant will fund research on enhancing visual perceptual learning – the brain’s ability to detect subtle visual differences – by leveraging attention mechanisms to generalize improvements across the visual field. Using computational modeling, brain imaging, and neurochemical analysis, the project aims to advance vision rehabilitation, optimize training in visually demanding professions, and inform AI systems that learn and adapt like the human brain.
Security researchers have developed the first functional defense mechanism capable of protecting against “cryptanalytic” attacks used to “steal” the model parameters that define how an AI system works.
ITU’s Facts and Figures 2025 shows steady progress in connectivity, while highlighting gaps in quality and affordability
Mouse pups conceived with in vitro fertilization (IVF) in the lab have slightly higher rates of DNA errors, or mutations, than pups conceived naturally, a new study of assisted reproductive technologies suggests.
A brief period of blindness right after birth can have lasting effects on how the brain processes visual detail—but surprisingly, not on how it recognizes faces, objects, or words. The team studied adults who were born with dense cataracts that left them blind for the first weeks or months of life, but who later had their sight restored. Using brain imaging, they compared these “cataract-reversal” participants with people who had normal vision since birth. The scans revealed a striking contrast: the primary visual cortex, responsible for fine detail and contrast, showed permanent changes caused by early blindness. Yet the ventral occipito-temporal cortex (VOTC)—a region crucial for recognizing categories like faces or words—appeared to function normally. These findings challenge the idea that all visual regions must develop during a single critical period: some areas are more sensitive early on, while others can adapt later through experience.
The iMetaMed framework illustrates the integrative vision for medicine by dissolving disciplinary boundaries. Four major modules are highlighted: (1) Molecular & Computational Frontiers, represented by AlphaFold3 protein structure prediction (precision diagnostics), GeneCompass federated learning, and single-cell transcriptome integration; (2) AI-Enabled Clinical Translation, including AI-driven drug discovery, virtual cell modeling, and generative virtual staining; (3) Data Science & Infrastructure, featuring big data methodologies, dual-axis slicing, semantic dictionaries, and accelerated Biobank data extraction; and (4) Health Systems & Public Impact, encompassing telemedicine applications, open science, transparent peer review, multilingual dissemination, and diversity-oriented equity frameworks. At the core, iMetaMed envisions a seamless continuum from molecules to clinical practice, population health, and policy—transforming information abundance into actionable breakthroughs for global health.
Breast cancer is a highly heterogeneous malignancy among women worldwide. Traditional prognostic models relying solely on clinicopathological features offer limited predictive accuracy and lack molecular-level insights. Unlike such conventional approaches, this study integrates proteomic and clinical data within an interpretable deep learning framework to improve prognostic precision and biological interpretability. We aimed to develop a more reliable model to accurately predict the 5-year survival status of patients with breast cancer using multi-omics data. The model integrating proteomics and clinical features demonstrated superior performance (AUC = 0.8136) compared with models built on other feature combinations. The optimized model with 13 key features (4 clinical features and 9 proteins) achieved an AUC of 0.864, with a precision of 0.970, a recall of 0.810, and an F1-score of 0.883. SHapley Additive exPlanations analysis identified MPHOSPH10, EGFR, ARL3, KRT18, lymph node status, and HER2 status as the most influential features, while Kolmogorov–Arnold Network analysis provided explicit mathematical relationships between key contributors and prediction outcomes. Collectively, our interpretable multi-modal model demonstrates robust performance in predicting 5-year survival in breast cancer patients and offers mechanistic insights, thereby enhancing its potential for clinical translation through the development of an accessible prediction tool.
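As a point of reference for the metrics reported above, the F1-score is the harmonic mean of precision and recall, so the three figures can be checked against one another. A minimal sketch (the function name is illustrative, not from the study):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Plugging in the reported precision (0.970) and recall (0.810)
# reproduces the reported F1-score of 0.883 (to three decimals).
f1 = f1_score(0.970, 0.810)
print(round(f1, 3))  # → 0.883
```

This kind of consistency check is a quick way to sanity-check reported classification metrics, since any two of precision, recall, and F1 determine the third.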