World-first safety guide for public use of AI health chatbots
Peer-Reviewed Publication
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Last Updated: 27-Apr-2026 00:15 ET (27-Apr-2026 04:15 GMT/UTC)
As members of the public increasingly turn to AI with health concerns, University of Birmingham researchers are leading a global programme to build the first definitive guide for safely navigating health information on AI-powered chatbots, published in Nature Health.
In a student-centered teaching project, undergraduates analyzed botanical elements in Taylor Swift music videos to activate prior knowledge and reinforce complex plant science concepts. The authors report improved comprehension, high student satisfaction, and the potential of popular culture to counter “plant blindness.”
Researchers from several partner institutions of the German Center for Diabetes Research (DZD), together with international colleagues, have developed a new approach for visualizing subtle tissue changes in the pancreas in type 2 diabetes. The results provide new insights into how the disease develops. The study has now been published in Nature Communications.
A research team from Merck (China) has proposed Jigsaw-LightRAG, a novel method that improves how artificial intelligence systems manage and update knowledge graphs. It decomposes a global knowledge graph (KG) into per-document subgraphs and leverages document lifecycle states (New, Modified, Persistent, Deleted) so that LLM-based extraction is invoked only for changed documents. The updated global KG is then reconstructed via code-level aggregation and strict string-matching deduplication, without additional LLM token consumption. Experiments across public benchmarks (LongBench, PubMedQA, and QASPER) demonstrate that in localized corpus-change scenarios, LLM token consumption drops by orders of magnitude (with DELETE operations requiring 0 tokens), while KG structural integrity remains stable and question-answering performance stays on par with full-rebuild baselines.
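The lifecycle-state idea can be sketched in code. The following is an illustrative Python sketch, not the authors' implementation: the LLM extraction step is stubbed with a trivial placeholder, and the document-store, cache, and triple formats are assumptions made for the example. It shows the core mechanic the summary describes, namely that only New and Modified documents trigger (simulated) LLM extraction, Deleted documents are dropped at zero token cost, and the global KG is rebuilt by plain set aggregation with exact string-match deduplication.

```python
from dataclasses import dataclass, field
from enum import Enum


class State(Enum):
    NEW = "new"
    MODIFIED = "modified"
    PERSISTENT = "persistent"
    DELETED = "deleted"


@dataclass
class Subgraph:
    # Edges stored as (head, relation, tail) string triples.
    triples: set = field(default_factory=set)


def llm_extract(text: str) -> Subgraph:
    """Placeholder for the costly LLM-based triple extraction step."""
    words = text.lower().split()
    return Subgraph({(words[i], "next", words[i + 1]) for i in range(len(words) - 1)})


def classify(old_docs: dict, new_docs: dict) -> dict:
    """Assign a lifecycle state to every document id across two corpus versions."""
    states = {}
    for doc_id, text in new_docs.items():
        if doc_id not in old_docs:
            states[doc_id] = State.NEW
        elif old_docs[doc_id] != text:
            states[doc_id] = State.MODIFIED
        else:
            states[doc_id] = State.PERSISTENT
    for doc_id in old_docs:
        if doc_id not in new_docs:
            states[doc_id] = State.DELETED
    return states


def update_kg(old_docs: dict, new_docs: dict, cache: dict):
    """Re-extract only New/Modified docs; reuse cached per-document subgraphs otherwise."""
    states = classify(old_docs, new_docs)
    for doc_id, state in states.items():
        if state in (State.NEW, State.MODIFIED):
            cache[doc_id] = llm_extract(new_docs[doc_id])  # the only place "LLM" runs
        elif state is State.DELETED:
            cache.pop(doc_id, None)  # zero-token delete: no extraction needed
    # Code-level aggregation with exact string-match deduplication (set union):
    global_kg = set()
    for sub in cache.values():
        global_kg |= sub.triples
    return global_kg, states
```

A quick usage example: with an old corpus `{"d1": "a b c", "d2": "x y"}` and a new one that keeps `d1`, edits `d2`, and adds `d3`, only `d2` and `d3` would hit the extraction stub, while `d1`'s cached subgraph is reused unchanged.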
Korean researchers report a step forward in the safety of generative AI. ETRI announced that it has unveiled and publicly released “Safe LLaVA,” a new type of vision language model that structurally enhances safety in generative AI: the model preemptively analyzes both images and text and can detect risks before responding. The team describes it as the first safety-optimized vision language model to be released, paving the way toward a safer AI era.