AI data processing meets privacy at the Josep Carreras Institute
Business Announcement
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention everywhere. Here, you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used across the world.
The SECURED project aims to generate libraries and machine learning tools that foster innovation in the fight against blood cancers while preserving the highest privacy standards for sensitive patient data. Dr Eduard Porta, head of the Cancer Immunogenomics team at the Josep Carreras Leukaemia Research Institute, is part of this Horizon Europe-funded collaboration, which will bring the most sophisticated technologies into the real world.
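The announcement does not say which specific privacy techniques the SECURED libraries rely on, so the following is only a minimal, hypothetical sketch of one widely used approach to training models on sensitive patient data: a differentially private gradient step that clips each record's contribution and adds calibrated noise. All function names, hyperparameters, and the toy data are illustrative placeholders, not SECURED code.

import numpy as np

# Illustrative DP-SGD-style update (clip per-record gradients, add noise).
# This is NOT the SECURED project's actual tooling; every name here is a placeholder.
def dp_gradient_step(weights, per_record_grads, lr=0.1, clip_norm=1.0,
                     noise_multiplier=1.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_record_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound each record's influence
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm hides any single record's contribution.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_record_grads),
                       size=mean_grad.shape)
    return weights - lr * (mean_grad + noise)

# Toy usage: logistic-regression-style gradients over a handful of synthetic "patient" rows.
rng = np.random.default_rng(42)
w = np.zeros(5)
X = rng.normal(size=(8, 5))
y = rng.integers(0, 2, size=8).astype(float)
for _ in range(200):
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X          # one gradient per patient record
    w = dp_gradient_step(w, grads, rng=rng)
print("noisily trained weights:", np.round(w, 3))

The clipping step is what caps how much any single patient record can shift the model, and the noise scale trades accuracy against the strength of the privacy guarantee.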
Hydrogen, with its carbon-free composition and the availability of abundant renewable energy sources for its production, holds significant promise as a fuel for internal combustion engines (ICEs). Its wide flammability limits and high flame speeds enable ultra-lean combustion, a promising strategy for reducing NOx emissions and improving thermal efficiency. However, lean hydrogen-air flames, characterized by low Lewis numbers, experience thermo-diffusive instabilities that can significantly influence flame propagation and emissions. Addressing this challenge requires a deep understanding of the fundamental flame dynamics of hydrogen-fueled engines. This study uses high-speed planar SO2-LIF to investigate the evolution of early flame kernels in hydrogen and methane flames and to analyze the intricate interplay between flame characteristics – such as flame curvature, the gradients of SO2-LIF intensity, the tortuosity of the flame boundary, and the equivalent flame speed – and the turbulent flow field. Differential diffusion effects are particularly pronounced in H2 flames, resulting in more significant flame wrinkling. In contrast, CH4 flames, while exhibiting smoother flame boundaries, are more sensitive to turbulence, showing increased wrinkling, especially under stronger turbulence conditions. The higher correlation between curvature and gradient in H2 flames indicates enhanced reactivity at the flame troughs, leading to faster flame propagation; however, increased turbulence can mitigate these effects. Hydrogen flames consistently exhibit higher equivalent flame speeds due to their higher thermo-diffusivity, and both hydrogen and methane flames accelerate under high turbulence conditions. These findings provide valuable insights into the distinct flame behaviors of hydrogen and methane, highlighting the importance of understanding the interactions between thermo-diffusive effects and turbulence in hydrogen-fueled engine combustion.
A University of Texas at Dallas researcher and his collaborators have developed an artificial intelligence (AI)-assisted tool that makes it possible for visually impaired computer programmers to create, edit and verify 3D models independently.
The tool, A11yShape, addresses a challenge for blind and low-vision programmers by providing a method for editing and verifying complex models without assistance from sighted individuals. The first part of the tool’s name is a numeronym, a number-based contracted word that stands for “accessibility” and is pronounced “al-ee.”
Even small, open-source AI chatbots can be effective political persuaders, according to a new study. The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale and personalization – are the dominant levers. The study also reveals evidence of a persuasion-accuracy tradeoff, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI.

There is growing concern that advances in AI – particularly conversational large language models (LLMs) – may soon give machines significant persuasive power over human beliefs at unprecedented scale. However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, remain largely unknown. To explore these risks, Kobi Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and which tactics AI systems rely on when attempting to change people’s minds.

Hackenburg et al. conducted three large-scale survey experiments involving nearly 77,000 participants who conversed with 19 different LLMs – ranging from small open-source systems to state-of-the-art “frontier” models – on hundreds of political issues. They also tested multiple prompting strategies and several post-training methods, and assessed how each “lever” affected persuasive impact and factual accuracy.

According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion. Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically, by as much as 51% and 27%, respectively. Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes.

Hackenburg et al. found AI systems are most persuasive when they deliver information-rich arguments; roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable tradeoff: models and prompting strategies that were effective in boosting persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence may inadvertently degrade accuracy. In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail.
Special note / related paper in Nature: A paper with overlapping authors and on related themes, “Persuading voters using human–artificial intelligence dialogues,” will be published in Nature under the same embargo: 2:00 p.m. U.S. Eastern Time on Thursday, 4 December. For the related paper, please refer to the Nature Press Site: http://press.nature.com or contact the Nature Press Office team at press@nature.com.
MIT researchers developed a smarter way for an LLM to allocate computation while it reasons about a problem. Their method lets the model dynamically adjust its computational budget based on the difficulty of the question and the likelihood that each partial solution will lead to the correct answer.
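The summary above describes the idea only at a high level, so the code below is a generic, hypothetical sketch of adaptive test-time compute allocation rather than the MIT team’s actual method: an estimated question difficulty sets how many candidate reasoning traces run in parallel, and a scoring function prunes partial solutions that look unlikely to reach a correct answer. The model call, difficulty estimator, and value scorer are stand-in placeholders.

import random

# Hypothetical stand-ins for an LLM decoding step, a difficulty estimator, and a
# learned value model; in a real system these would be model calls, not random numbers.
def estimate_difficulty(question):
    random.seed(hash(question) % (2**32))
    return random.random()                 # pretend difficulty score in [0, 1]

def extend_solution(partial):
    return partial + " -> step"            # pretend one more reasoning step is generated

def score_partial_solution(partial):
    return random.random()                 # pretend P(this trace ends in a correct answer)

def solve_adaptively(question, min_traces=2, max_traces=16, steps=4, keep_threshold=0.3):
    # Harder questions get a larger compute budget (more parallel candidate traces).
    difficulty = estimate_difficulty(question)
    n_traces = min_traces + round(difficulty * (max_traces - min_traces))
    traces = [question] * n_traces
    for _ in range(steps):
        traces = [extend_solution(t) for t in traces]
        # Drop partial solutions judged unlikely to succeed, reallocating budget away
        # from dead ends; always keep at least the single best-scoring trace.
        scored = sorted(((score_partial_solution(t), t) for t in traces), reverse=True)
        traces = [t for s, t in scored if s >= keep_threshold] or [scored[0][1]]
    return traces

if __name__ == "__main__":
    for q in ["What is 2 + 2?", "Prove a hard competition problem."]:
        survivors = solve_adaptively(q)
        print(f"{q!r}: kept {len(survivors)} candidate trace(s) after adaptive pruning")

The two dials here, the initial number of traces and the pruning threshold, are the kind of quantities a dynamic allocation scheme in this spirit would tune per question rather than fixing in advance.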