New method advances reliability of AI with applications in medical diagnostics
Peer-Reviewed Publication
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
MIT researchers used sparse autoencoders to shed light on the inner workings of protein language models, an advance that could streamline the process of identifying new drugs or vaccine targets.
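For readers unfamiliar with the technique, the sketch below shows the basic idea of a sparse autoencoder trained on embeddings from a protein language model: an overcomplete encoder with a sparsity penalty learns interpretable features of the model's internal representations. The dimensions, expansion factor and training loop are illustrative assumptions, not details of the MIT work.

```python
# Minimal sketch of a sparse autoencoder over protein-language-model embeddings.
# Sizes, penalty strength and the random placeholder data are assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)   # maps embeddings to an overcomplete feature space
        self.decoder = nn.Linear(d_hidden, d_model)   # reconstructs the original embedding

    def forward(self, x):
        features = torch.relu(self.encoder(x))        # sparse, non-negative feature activations
        return self.decoder(features), features

# Placeholder batch standing in for residue embeddings from a protein language model.
d_model, d_hidden = 1280, 8192                        # assumed widths (ESM-style embedding, ~6x expansion)
embeddings = torch.randn(256, d_model)

sae = SparseAutoencoder(d_model, d_hidden)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3                                      # sparsity penalty strength (assumed)

for _ in range(100):
    recon, features = sae(embeddings)
    loss = nn.functional.mse_loss(recon, embeddings) + l1_weight * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, individual hidden features of such an autoencoder can then be inspected to see which protein properties or sequence motifs they respond to.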
When it comes to adopting artificial intelligence in high-stakes settings like hospitals and airplanes, good AI performance and brief worker training on the technology are not sufficient to ensure systems will run smoothly and patients and passengers will be safe, a new study suggests. Instead, algorithms and the people who use them in the most safety-critical organizations must be evaluated together to get an accurate view of AI's effects on human decision making, researchers say. The team also contends these evaluations should assess how people respond to good, mediocre and poor technology performance, both to put the AI-human interaction to a meaningful test and to expose the level of risk linked to mistakes.
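As a rough illustration of evaluating the human and the algorithm together, the hypothetical simulation below compares final decision accuracy when the same simulated users receive good, mediocre and poor AI advice. All accuracy and reliance figures are assumed for demonstration and do not come from the study.

```python
# Hedged sketch of a joint human-AI evaluation across AI quality tiers.
# Every number here (accuracies, reliance rate) is an illustrative assumption.
import random

def simulate_decisions(ai_accuracy: float, reliance: float, n_cases: int = 10_000) -> float:
    """Return the fraction of correct final decisions for one AI quality tier."""
    human_accuracy = 0.80                      # assumed unaided human accuracy
    correct = 0
    for _ in range(n_cases):
        truth = random.choice([0, 1])
        ai_call = truth if random.random() < ai_accuracy else 1 - truth
        own_call = truth if random.random() < human_accuracy else 1 - truth
        final = ai_call if random.random() < reliance else own_call  # user defers to AI with prob. `reliance`
        correct += (final == truth)
    return correct / n_cases

for tier, acc in [("good", 0.95), ("mediocre", 0.80), ("poor", 0.60)]:
    print(tier, round(simulate_decisions(ai_accuracy=acc, reliance=0.7), 3))
```

The point of such a design is that the same level of human reliance can help with a strong model and hurt with a weak one, which only becomes visible when both are measured in the same evaluation.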
Graph-structured data are pervasive in the real world, appearing in social networks, molecular graphs and transaction networks. Graph neural networks (GNNs) have achieved great success in representation learning on graphs, facilitating various downstream tasks. However, GNNs have several drawbacks: they lack interpretability, can easily inherit biases in the data and cannot model causal relations. Recently, counterfactual learning on graphs has shown promising results in alleviating these drawbacks, and various approaches have been proposed for counterfactual fairness, explainability, link prediction and other applications on graphs. To facilitate the development of this promising direction, in this survey, researchers categorize and comprehensively review papers on graph counterfactual learning. They divide existing methods into four categories based on the problems studied and, for each category, provide background and motivating examples, a general framework summarizing existing works and a detailed review of those works. They also point out promising future research directions at the intersection of graph-structured data, counterfactual learning and real-world applications. To offer a comprehensive view of resources for future studies, the researchers compile a collection of open-source implementations, public datasets and commonly used evaluation metrics. This survey aims to serve as a "one-stop-shop" for building a unified understanding of graph counterfactual learning categories and current resources.
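To make the counterfactual idea concrete, here is a minimal, hypothetical sketch of a counterfactual explanation on a toy graph: greedily removing edges from the input of a simple two-layer GCN until the prediction for a target node flips. The model, graph and search strategy are assumptions for illustration and are not drawn from any specific method in the survey.

```python
# Toy counterfactual explanation on a graph: find an edge whose removal
# flips a GNN's prediction for a target node. Model and data are illustrative.
import torch

def gcn_forward(adj, x, w1, w2):
    """Two-layer GCN with row-normalised adjacency (self-loops included)."""
    a_hat = adj + torch.eye(adj.size(0))
    a_norm = a_hat / a_hat.sum(dim=1, keepdim=True)
    h = torch.relu(a_norm @ x @ w1)
    return a_norm @ h @ w2                      # per-node class logits

torch.manual_seed(0)
n_nodes, n_feat, n_class = 6, 4, 2
x = torch.randn(n_nodes, n_feat)                # random node features (placeholder)
adj = torch.zeros(n_nodes, n_nodes)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]:
    adj[i, j] = adj[j, i] = 1.0                 # small undirected cycle graph
w1, w2 = torch.randn(n_feat, 8), torch.randn(8, n_class)  # untrained weights, for illustration only

target = 0
original = gcn_forward(adj, x, w1, w2)[target].argmax().item()

# Greedy counterfactual search: try deleting one edge at a time.
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes) if adj[i, j] > 0]
for i, j in edges:
    perturbed = adj.clone()
    perturbed[i, j] = perturbed[j, i] = 0.0
    if gcn_forward(perturbed, x, w1, w2)[target].argmax().item() != original:
        print(f"Counterfactual found: removing edge ({i}, {j}) flips node {target}'s prediction")
        break
else:
    print("No single-edge counterfactual exists for this toy model")
```

Real methods in this area typically learn the perturbation (to edges, features or both) rather than searching exhaustively, and add constraints so the counterfactual stays close to the original graph, but the underlying question is the same: what minimal change to the graph would change the model's decision?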