Most AI bots lack basic safety disclosures, study finds
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Last Updated: 9-May-2026 06:16 ET (9-May-2026 10:16 GMT/UTC)
The AI Agent Index investigates 30 top AI bots and finds that just four have published formal safety and evaluation documents covering the agents themselves. The new wave of AI web-browser agents, many designed to mimic human browsing, has the highest rate of missing safety information.
Babies with an increased likelihood of autism may struggle to settle into deep, restorative sleep, according to a new study from the University of East Anglia. Researchers studied the link between sleep and sensory sensitivity, which is common in neurodivergent infants. They found that when babies with this trait napped in a noisy environment, their deep sleep was considerably disrupted. But even in a quiet room, those with high sensory sensitivity still slept more lightly - suggesting that both their unique sensory wiring and their surroundings influence how well they rest.
A new method can test whether a large language model contains hidden biases, personalities, moods, or other abstract concepts. MIT researchers can zero in on the connections within a model that encode a concept of interest, which could improve LLM safety and performance.
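One common way to test whether a model encodes a concept is a linear probe fit to hidden activations. The sketch below is a minimal illustration of that general idea, not the MIT method itself, and it uses synthetic activations rather than real model states.

```python
# Minimal sketch of concept probing: fit a linear probe that separates
# activations from examples expressing a concept vs. not expressing it.
# All "activations" here are synthetic stand-ins for real LLM hidden states.
import numpy as np

rng = np.random.default_rng(0)
dim = 64
concept_dir = rng.normal(size=dim)
concept_dir /= np.linalg.norm(concept_dir)

# Synthetic activations: "concept" examples are shifted along concept_dir.
neg = rng.normal(size=(200, dim))
pos = rng.normal(size=(200, dim)) + 3.0 * concept_dir
X = np.vstack([neg, pos])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic-regression probe with plain gradient descent.
w = np.zeros(dim)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
# A high-accuracy linear probe suggests the concept is linearly encoded
# in these activations; near-chance accuracy suggests it is not.
```

In practice the probe would be trained on activations from a held-out layer of the actual model, and its accuracy compared against a shuffled-label baseline.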
Prometheus Grand Challenge aims to deploy commercial-scale nuclear reactors in years, not decades.
IDAHO FALLS, Idaho — The Idaho National Laboratory and NVIDIA have partnered to advance nuclear energy deployment through artificial intelligence. The collaboration aims to accelerate advanced nuclear reactor deployment and reduce costs.
Researchers at Arc Institute developed MULTI-evolve, an AI-guided framework that compresses protein engineering from months of iterative experimentation into weeks using as few as 200 strategic lab measurements. By training neural networks on pairwise combinations of beneficial mutations, the approach learns the rules of how mutations interact and accurately predicts complex multi-mutant proteins, achieving up to 256-fold activity improvements in a single experimental round. Published in Science, the framework and open-source tools are applicable to enzymes, genome editors, and therapeutic proteins.
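The core idea of learning from pairwise mutation combinations can be illustrated with a toy additive-plus-interaction model: fit per-mutation effects and pairwise interaction terms from single- and double-mutant measurements, then extrapolate to a higher-order variant. This is a hypothetical sketch under those assumptions, not Arc Institute's actual MULTI-evolve framework, and the data are synthetic.

```python
# Toy illustration: fit single-mutation effects plus pairwise interaction
# terms from "measurements" of singles and doubles, then predict an
# unmeasured triple mutant. Ground truth here is exactly additive + pairwise,
# so the noiseless fit recovers it; real protein data is far messier.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_mut = 5
true_single = rng.normal(size=n_mut)                 # per-mutation effects
pairs = list(itertools.combinations(range(n_mut), 2))
true_pair = {p: rng.normal(scale=0.3) for p in pairs}  # interaction terms

def activity(muts):
    """Ground-truth log-activity of a variant carrying mutations `muts`."""
    s = sum(true_single[m] for m in muts)
    s += sum(v for p, v in true_pair.items() if set(p) <= set(muts))
    return s

def features(muts):
    """One-hot singles plus indicator features for each co-occurring pair."""
    x = np.zeros(n_mut + len(pairs))
    for m in muts:
        x[m] = 1.0
    for j, p in enumerate(pairs):
        if set(p) <= set(muts):
            x[n_mut + j] = 1.0
    return x

# "Measure" all singles and doubles, then solve the linear system.
variants = [(i,) for i in range(n_mut)] + pairs
X = np.array([features(v) for v in variants])
y = np.array([activity(v) for v in variants])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Extrapolate to a triple mutant never seen during fitting.
triple = (0, 2, 4)
pred = features(triple) @ coef
true = activity(triple)
```

The point of the sketch is the extrapolation step: because interactions were learned pairwise, the model can score combinations of three or more mutations without measuring them directly.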
A team of researchers has found a way to steer the output of large language models by manipulating specific concepts inside these models. The new method could lead to more reliable, more efficient, and less computationally expensive training of LLMs. But it also exposes potential vulnerabilities. The researchers present their findings in the Feb. 19, 2026, issue of the journal Science.
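Steering a model by manipulating an internal concept is often demonstrated with activation steering: adding a concept-aligned vector to a hidden state to shift the output. The sketch below is a minimal toy version of that idea under assumed synthetic weights, not the method from the Science paper.

```python
# Minimal sketch of activation steering: adding a concept-aligned vector
# to a hidden state shifts a downstream score toward that concept.
# The steering direction and readout weights are synthetic assumptions;
# in practice the direction is extracted from the model itself, e.g. by
# contrasting activations on prompts with and without the concept.
import numpy as np

rng = np.random.default_rng(1)
dim = 32
steer = rng.normal(size=dim)          # assumed concept direction
steer /= np.linalg.norm(steer)

def readout(h, w):
    # Toy "output" score: projection of the hidden state onto readout weights.
    return float(h @ w)

h = rng.normal(size=dim)                      # stand-in hidden activation
w = steer + 0.1 * rng.normal(size=dim)        # readout correlated with concept

base = readout(h, w)
steered = readout(h + 4.0 * steer, w)
# The steered score moves in the concept's direction; scaling the 4.0
# coefficient trades off steering strength against disruption.
```

The same mechanism that enables cheap control is what exposes a vulnerability: anyone who can inject a vector into the residual stream can steer the model.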