Social justice should not be tokenistic but at the heart of global restoration efforts
Peer-Reviewed Publication
Social justice must be at the heart of global restoration initiatives – and not “superficial” or “tokenistic” – if ecosystem degradation is to be addressed effectively, according to new research.
Led by researchers at the University of East Anglia (UEA), the study sought to explore what can make restoration effective for people and nature. Publishing their findings today in Nature Sustainability, they argue that placing social justice at the centre of restoration practice remains vital to success, with ecological targets aligned to local social, economic and cultural ones.
A new study is the first to comprehensively map three decades of income inequality data across 151 nations around the world. Despite finding that income inequality is worsening for half the world’s people, the study also indicates that effective policy may be helping to bridge the gap in ‘bright spots’: administrative areas that account for around a third of the global population.
A new study reveals how our brains store and change memories. Researchers investigated episodic memory - the kind of memory we use to recall personal experiences like a birthday party or holiday.
They showed that memories aren’t just stored like files in a computer. Instead, they’re made up of different parts. And while some are active and easy to recall, others stay hidden until something triggers them.
Importantly, the review shows that for something to count as a real memory, it must be linked to a real event from the past. But even then, the memory we recall might not be a perfect copy. It can include extra details from our general knowledge, past experiences, or even the situation we’re in when we remember it.
The team say their work has important implications for mental health, education, and legal settings where memory plays a key role.
In a new study, Francisco Polidoro Jr., professor of management at Texas McCombs, finds present-day insights in an old innovation story: how NASA developed its space shuttles, which flew from 1981 to 2011. The lessons can inform today's rocketeers and anyone looking for breakthroughs in cutting-edge fields, from phones to pharmaceuticals.
Rather than a straightforward sequence, NASA used a meandering knowledge-building process, he finds. That process allowed it to systematically explore rocket features, both individually and together.
“With breakthrough inventions, the number of combinations of possible features quickly explodes, and you just can’t test all of them,” Polidoro says. “It has to be a much more selective search process.”
Even small, open-source AI chatbots can be effective political persuaders, according to a new study. The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale and personalization – are the dominant levers. The study also reveals evidence of a persuasion-accuracy tradeoff, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI.

There is growing concern that advances in AI – particularly conversational large language models (LLMs) – may soon give machines significant persuasive power over human beliefs at unprecedented scale. However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, remain largely unknown. To explore these risks, Kobi Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and which tactics AI systems rely on when attempting to change people’s minds.

Hackenburg et al. conducted three large-scale survey experiments involving nearly 77,000 participants who conversed with 19 different LLMs – ranging from small open-source systems to state-of-the-art “frontier” models – on hundreds of political issues. They also tested multiple prompting strategies and several post-training methods, and assessed how each “lever” affected persuasive impact and factual accuracy.

According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion. Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically, by as much as 51% and 27%, respectively. Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes.

Hackenburg et al. found that AI systems are most persuasive when they deliver information-rich arguments. Roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable tradeoff: models and prompting strategies that were effective in boosting persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence may inadvertently degrade accuracy. In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail.
Special note / related paper in Nature: A paper with overlapping authors and on related themes, “Persuading voters using human–artificial intelligence dialogues”, will be published in Nature on the same day and time (and embargoed until the same day and time): 2:00 p.m. U.S. Eastern Time on Thursday, 4 December. For the related paper, please refer to the Nature Press Site: http://press.nature.com or contact the Nature Press Office team at press@nature.com.