Welcome to In the Spotlight, where each month we shine a light on something exciting, timely, or simply fascinating from the world of science.
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Latest News Releases
Updates every hour. Last Updated: 30-Dec-2025 04:11 ET (30-Dec-2025 09:11 GMT/UTC)
Accelerating materials design with high-throughput experiments and data science
Japan Advanced Institute of Science and Technology
How AI might be narrowing our worldview and what regulators can do about it
The Hebrew University of Jerusalem
Peer-Reviewed Publication
A new study highlights that generative AI systems—especially large language models like ChatGPT—tend to produce standardized, mainstream content, which can subtly narrow users’ worldviews and suppress diverse, nuanced perspectives. This isn’t just a technical issue; it has real social consequences, from eroding cultural diversity to undermining collective memory and weakening democratic discourse. Existing AI governance frameworks, focused on principles like transparency or data security, don’t go far enough to address this “narrowing world” effect. To fill that gap, the article introduces “multiplicity” as a new principle for AI regulation, urging developers to design AI systems that expose users to a broader range of narratives, support diverse alternatives, and encourage critical engagement—so that AI can enrich, rather than limit, the human experience.
AI chatbots can run with medical misinformation, study finds, highlighting the need for stronger safeguards
The Mount Sinai Hospital / Mount Sinai School of Medicine
Peer-Reviewed Publication
A new study by researchers at the Icahn School of Medicine at Mount Sinai finds that widely used AI chatbots are highly vulnerable to repeating and elaborating on false medical information, revealing a critical need for stronger safeguards before these tools can be trusted in health care. The researchers also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves. Their findings were detailed in the August 2 online issue of Communications Medicine [https://doi.org/10.1038/s43856-025-01021-3].
- Journal
- Communications Medicine
- Funder
- NIH/National Center for Advancing Translational Sciences (NCATS), NIH/National Institutes of Health
Yonsei University researchers directly measure quantum metric tensor in real material
Yonsei University
Peer-Reviewed Publication
The quantum metric—a key measure of quantum distance in solid-state materials—helps determine the electronic properties of solids, such as transport phenomena. While scientists have measured the quantum metric directly in artificial systems, determining it in real solids has proven challenging. Recently, researchers from Yonsei University obtained this quantity using photoemission measurements in black phosphorus, advancing both theoretical and experimental quantum physics.
- Journal
- Science
Ph.D. researcher harnesses AI to transform skin cancer diagnosis in remote areas
Heriot-Watt University
- Funder
- Engineering and Physical Sciences Research Council
Ensuring drug safety using AI models for adverse drug reaction prediction
Pensoft Publishers
Peer-Reviewed Publication
An AI model was developed to predict adverse drug reactions based solely on chemical structure. The system could potentially support early-stage drug safety assessment before clinical trials begin. The study is published in the open-access journal Pharmacia.
- Journal
- Pharmacia