New deep-learning tool can tell if your salmon is wild or farmed
Peer-Reviewed Publication
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
Last Updated: 13-May-2026 14:15 ET (13-May-2026 18:15 GMT/UTC)
A new paper in Biology Methods and Protocols shows that deep learning can distinguish wild from farmed salmon, a capability that could strengthen environmental-protection and traceability strategies.
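The paper's actual model, features, and data are not described here, but the core idea — training a small neural classifier to separate two fish classes from measured features — can be sketched as follows. The feature names, synthetic data, and network shape below are all invented for illustration.

```python
# Illustrative sketch only: a tiny neural classifier separating two
# synthetic "salmon" classes. Features (e.g., fatty-acid ratio, pigment
# intensity) and data are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic clusters standing in for wild vs. farmed samples.
wild = rng.normal(loc=[0.3, 0.7], scale=0.1, size=(200, 2))
farmed = rng.normal(loc=[0.7, 0.3], scale=0.1, size=(200, 2))
X = np.vstack([wild, farmed])
y = np.array([0] * 200 + [1] * 200)  # 0 = wild, 1 = farmed

# One hidden layer + sigmoid output, trained by plain gradient descent.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted P(farmed)
    grad_out = (p - y)[:, None] / len(y)        # dLoss/dlogit (cross-entropy)
    grad_h = grad_out @ W2.T * (1 - h ** 2)     # backprop through tanh
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(0)

# Evaluate with the final weights.
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The real system would of course use richer inputs (images, chemical profiles, or genomic markers) and a deeper network, but the train-a-classifier-on-labeled-samples structure is the same.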
Researchers at Zhaoqing University, China, led by Dr. Guodong Yuan of the Guangdong Provincial Key Laboratory of Eco-Environmental Studies and Low-Carbon Agriculture in Peri-Urban Areas and the Guangdong Technology and Equipment Research Center for Soil and Water Pollution Control, have examined the carbon dynamics of year-round waterlogged pond fields. Their study, "Unveiling Carbon Dynamics in Year-Round Waterlogged Pond Fields: Insights into Soil Organic Carbon Accumulation and Sustainable Management," offers a fresh perspective on how these ecosystems can contribute to carbon sequestration and sustainable land management.
Researchers have demonstrated a new approach to building quantum convolutional neural networks (QCNNs) using photonic circuits, paving the way for more efficient quantum machine learning. The method, reported in Advanced Photonics, introduces an adaptive step called “state injection,” allowing the circuit to adjust its behavior based on real-time measurements. Using single photons and integrated quantum photonic processors, the team achieved over 92 percent classification accuracy on simple image patterns, closely matching theoretical predictions. This proof-of-concept shows that QCNNs can be implemented with existing photonic technology and highlights a path toward scalable quantum processors for future applications in AI and data processing.
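The distinctive ingredient here is adaptivity: measure part of the system mid-circuit, then choose the next operation based on the outcome. A classical statevector toy can illustrate that measure-then-adapt control flow; the gates, the injected states, and the single-qubit setting below are invented for illustration and are not the paper's photonic implementation.

```python
# Conceptual sketch of the "measure, then adapt" idea behind adaptive
# circuits such as state injection, simulated on a 1-qubit statevector.
# Not the photonic QCNN from the paper; states and gates are illustrative.
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ZERO = np.array([1.0, 0.0])                   # |0>

def measure(state):
    """Projective measurement in the computational basis."""
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

def adaptive_layer(state):
    """Measure the qubit, then inject a fresh state chosen by the outcome."""
    outcome = measure(state)
    # Hypothetical adaptive rule: outcome 0 -> inject |+>, outcome 1 -> |0>.
    injected = H @ ZERO if outcome == 0 else ZERO
    return outcome, injected

state = H @ ZERO                      # start in |+>
outcome, state = adaptive_layer(state)
print("measured:", outcome, "injected state:", np.round(state, 3))
```

In the photonic setting, the "measurement" is a photon detection and the "injection" is a freshly prepared photon, but the control structure — classical information from a detector steering the rest of the circuit — is the same.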
MIT researchers find that large language models sometimes mistakenly link certain grammatical sequences to specific topics, and then rely on these learned patterns when answering queries. This phenomenon can cause LLMs to fail unexpectedly on new tasks and could be exploited by adversarial agents to trick an LLM into generating harmful content.
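The failure mode described — a model leaning on a grammatical pattern that merely correlates with topic in its training data — can be shown with a deliberately crude toy. The sentences, topics, and first-word "model" below are invented for illustration and are not the MIT experiment.

```python
# Toy illustration of a model latching onto a syntactic cue (the opening
# word of a question) that happens to correlate with topic in training.
# Data and the "model" are invented; this is not the MIT study's setup.
from collections import Counter, defaultdict

train = [
    ("Where is the capital of France", "geography"),
    ("Where does the Nile flow", "geography"),
    ("How do vaccines work", "medicine"),
    ("How is insulin produced", "medicine"),
]

# "Model": majority topic per opening word -- a purely grammatical cue.
by_first = defaultdict(Counter)
for sent, topic in train:
    by_first[sent.split()[0]][topic] += 1

def predict(sent):
    first = sent.split()[0]
    if first not in by_first:
        return "unknown"
    return by_first[first].most_common(1)[0][0]

# The shortcut breaks as soon as syntax and topic come apart:
print(predict("Where is the pancreas located"))  # prints "geography"
```

A real LLM's shortcut is distributed across billions of weights rather than a lookup table, but the risk is analogous: an input crafted to match the learned grammatical template can steer the model toward the associated behavior regardless of the actual content.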