Too many teachers are burning out: new research points to micro-credentials as a path forward
Peer-Reviewed Publication
A new study in ECNU Review of Education argues that micro-credentials (short, competency-based qualifications requiring verified classroom evidence) offer a fundamentally different approach to teacher professional development. Rather than rewarding "seat time," micro-credentials restore educator agency and link learning to verifiable classroom results. With teacher job satisfaction having fallen sharply over the past fifteen years and a global retention crisis deepening, the author contends that micro-credentials represent a timely and necessary reimagining of how educators can take control of their learning.
Our skin’s stem cells can retain “memories” of past experiences that can either prepare them to heal injuries faster in the future or lead to chronic inflammation, as in psoriasis. Researchers have now discovered that distinct genetic sequences propel a handful of critical memories into the years-long timeframes that underpin chronic disease. The findings could provide valuable inroads for developing new therapeutic strategies for breaking the cycle of inflammatory disease.
Thanks to a satellite that happened to be flying over the 2025 Kamchatka tsunami not long after it formed, researchers have gained unprecedented insights – beyond what land-based tools could provide – into the development and spread of this catastrophic wave. The findings establish the satellite as a powerful new tool for constraining earthquake source processes, with important implications for understanding tsunami hazards and the dynamics of subduction zones. Tsunamis from large subduction earthquakes deep below the ocean are among the most severe natural hazards. These long ocean waves can travel thousands of kilometers from their point of origin – crossing entire ocean basins – and devastate distant coastlines. However, despite their catastrophic potential, the physics underlying tsunami generation and propagation remain poorly understood due to the reliance on land-based seismic and geodetic data and distant deep-water sensors. On July 29, 2025, the magnitude 8.8 Kamchatka earthquake and resulting Pacific-spanning tsunami illustrated these challenges. Although traditional monitoring using coastal gauges and seafloor sensors captured part of the event, these methods were limited by sparse coverage and attenuation of short-wavelength waves.
Now, Ignacio Sepúlveda and colleagues present direct observations of the tsunami using the NASA/CNES Surface Water and Ocean Topography (SWOT) satellite, which happened to fly over the region roughly 70 minutes after the event began, offering high-resolution two-dimensional measurements of sea-surface height with centimeter-level precision. According to Sepúlveda et al., SWOT captured the full wavefield, including short-wavelength wave trains trailing the leading front, revealing the directions, curvature, and wavelengths of the tsunami waves. Moreover, sensitivity analyses of the data show that the tsunami was generated within roughly 10 kilometers of the subduction-zone trench, an insight that cannot be obtained from land-based measurements or seafloor sensors alone. By directly linking detailed, two-dimensional satellite observations of the tsunami’s dispersive wavefield to its near-trench source, the findings mark the first such high-resolution spaceborne evidence of tsunamigenesis.
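For readers who want a feel for why resolving those trailing short-wavelength wave trains matters, the minimal sketch below (not the authors' analysis code; the ocean depth and wavelength values are illustrative assumptions) uses the standard linear dispersion relation for surface gravity waves to show that shorter-wavelength tsunami components travel more slowly and therefore lag behind the leading front.

```python
import numpy as np

# Illustrative sketch, not the study's method: linear dispersion of surface
# gravity waves, omega^2 = g * k * tanh(k * h). Shorter wavelengths travel
# more slowly, which is why they trail the leading tsunami front.
g = 9.81      # gravitational acceleration, m/s^2
h = 4000.0    # assumed open-ocean depth in meters (illustrative value)

wavelengths_km = np.array([500.0, 200.0, 100.0, 50.0, 20.0])
k = 2 * np.pi / (wavelengths_km * 1e3)       # wavenumber, rad/m
omega = np.sqrt(g * k * np.tanh(k * h))      # angular frequency, rad/s
phase_speed = omega / k                      # phase speed, m/s

for wavelength, speed in zip(wavelengths_km, phase_speed):
    print(f"wavelength {wavelength:6.0f} km -> phase speed {speed:6.1f} m/s")

# Long waves approach the shallow-water limit sqrt(g*h), about 198 m/s here;
# shorter components are dispersive and slower, producing the trailing wave
# trains that two-dimensional altimetry such as SWOT's can resolve.
```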
For researchers interested in research integrity-related themes, author Ignacio Sepúlveda notes: “I strongly support open data and reproducible research, but I am more cautious about the growing role of non-peer-reviewed preprints, which can circulate findings before they have been adequately tested and validated. This practice can negatively impact the testing, validation and peer-review of a scientific discovery because it puts additional pressure on authors (i.e. publish before a pre-print without validation comes out). Without pre-prints, a discovery will be only delayed by a few months and because of a good reason: validation.”
In an unprecedented observation, researchers captured the birth of a sperm whale calf, documenting how 11 whales from two normally separate family groups coordinated closely to support the newborn for hours after its arrival. These findings offer quantitative evidence of direct communal caregiving in cetaceans and suggest that short-term, highly coordinated cooperation during critical moments like birth may play a foundational role in maintaining the complex social structures seen in sperm whale societies. The evolution of cooperation remains a fundamental question in biology, particularly among highly social, long-lived mammals such as toothed whales. Species like sperm whales exhibit remarkably intricate social systems, in which stable, matrilineal family units cooperate in activities such as foraging and communal caregiving. Birth represents a critical and high-risk moment for the animals, as whale calves require immediate support to survive, making it a uniquely revealing context for understanding cooperative behavior. However, studying these deep-diving creatures in the open ocean is a significant challenge, and direct observations of sperm whale births are exceedingly rare. As a result, cooperative behavior around sperm whale births has long remained a mystery.
Here, Alaa Maalouf and colleagues present a detailed, high-resolution analysis of a sperm whale birth by integrating drone video footage, machine learning, and long-term data on social relationships and kinship. In July 2023, off the coast of Dominica, Maalouf et al. observed 11 members of a known sperm whale social unit, comprising two typically separate and unrelated family groups, gathering unusually close to the surface. Although these subgroups are generally distinct in their foraging behavior and social associations, they formed a cohesive cluster as a birth unfolded. Using drone footage, the authors documented the 34-minute delivery of a calf, followed by a period of intense, coordinated activity in which multiple adult females surrounded the mother. According to the authors, in the hour after birth, the group displayed strikingly cooperative behavior; individuals from both family groups took turns physically supporting and lifting the newborn to the surface, likely assisting it in breathing. The entire unit remained tightly organized during this critical period. In addition, there were close passes by Fraser’s dolphins and brief interactions with pilot whales. Several hours after the birth, the sperm whale cluster gradually dispersed into smaller, more typical foraging groups.
Artificial intelligence (AI) chatbots that offer advice and support for interpersonal issues may be quietly reinforcing harmful beliefs through overly sycophantic responses, a new study reports. Across a range of contexts, the chatbots affirmed human users at substantially higher rates than humans did, the study finds, with harmful consequences including users becoming more convinced of their own rightness and less willing to repair relationships. According to the authors, the findings illustrate that AI sycophancy is not only widespread across AI models but also socially consequential – even brief interactions can skew an individual’s judgement and “erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold.” The results “highlight the need for accountability frameworks that recognize sycophancy as a distinct and currently unregulated category of harm,” the authors say.
Research on the social impacts of AI has increasingly drawn attention to sycophancy in AI large language models (LLMs) – the tendency to over-affirm, flatter, or agree with users. While this behavior can seem harmless on the surface, emerging evidence suggests that it may pose serious risks, particularly for vulnerable individuals, where excessive validation has been associated with harmful outcomes, including self-destructive behavior. At the same time, AI systems are becoming deeply embedded in social and emotional contexts, often serving as sources of advice and personal support. For example, a significant number of people now turn to AI for meaningful conversations, including guidance on relationships. In these settings, sycophantic responses can be particularly problematic as undue affirmation may embolden questionable decisions, reinforce unhealthy beliefs, and legitimize distorted interpretations of reality. Yet despite these concerns, social sycophancy in AI models remains poorly understood.
To address this gap, Myra Cheng and colleagues developed a systematic framework to evaluate social sycophancy, examining both its prevalence in popular AI models and its real-world effects on those who use them. Using Reddit community “AITA” posts, Cheng et al. evaluated a diverse set of 11 state-of-the-art and widely used LLMs from leading companies (e.g., OpenAI, Anthropic, Google) and found that these systems affirmed users’ actions 49% more often than humans did, even in scenarios involving deception, harm, or illegality. Then, in two subsequent experiments, the authors explored the behavioral consequences of this sycophancy. According to the findings, participants who engaged with sycophantic AI about interpersonal scenarios, particularly conflicts, became more convinced of their own correctness and less inclined to reconcile or take responsibility, even after only one interaction. Moreover, these same participants judged the sycophantic responses as more helpful and trustworthy, and expressed greater willingness to rely on such systems again, suggesting that the very feature that causes harm also drives engagement. “Addressing these challenges will not be simple, and solutions are unlikely to arise organically from current market incentives,” writes Anat Perry in a related Perspective. “Although AI systems could, in principle, be optimized to promote broader social goals or longer-term personal development, such priorities do not naturally align with engagement-driven metrics.”
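As a rough illustration of the kind of comparison behind the 49% figure, the sketch below shows how an affirmation-rate analysis can be framed: responses to the same set of posts are labeled as affirming the poster or not, and the model rate is compared against the human baseline. This is a hypothetical sketch under stated assumptions, not the authors' code; the data, labels, and function names are invented placeholders.

```python
# Hypothetical sketch of an affirmation-rate comparison, not the study's code.
# Each response to an "AITA"-style post is labeled as affirming the poster's
# actions or not, then model and human rates are compared.
from dataclasses import dataclass

@dataclass
class Response:
    source: str    # "model" or "human"
    affirms: bool  # True if the response endorses the poster's actions

def affirmation_rate(responses, source):
    """Fraction of responses from the given source that affirm the poster."""
    subset = [r for r in responses if r.source == source]
    return sum(r.affirms for r in subset) / len(subset)

# Toy placeholder data standing in for labeled responses to the same posts.
responses = [
    Response("model", True), Response("model", True), Response("model", False),
    Response("human", True), Response("human", False), Response("human", False),
]

model_rate = affirmation_rate(responses, "model")
human_rate = affirmation_rate(responses, "human")
relative_increase = (model_rate - human_rate) / human_rate
print(f"model affirms {model_rate:.0%}, humans {human_rate:.0%}, "
      f"relative increase {relative_increase:.0%}")
```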
Podcast: A segment of Science's weekly podcast with Myra Cheng, related to this research, will be available on the Science.org podcast landing page [http://www.science.org/podcasts] after the embargo lifts. Reporters are free to make use of the segments for broadcast purposes and/or quote from them – with appropriate attribution (i.e., cite "Science podcast"). Please note that the file itself should not be posted to any other Web site.
***An embargoed news briefing was held on Tuesday, 24 March, as a Zoom Webinar. Recordings are now available at https://aaas.zoom.us/rec/share/9qnRHLJ3Sc7OQxK6vWHWSiNvCcIN5Lh4j3sJiqulXybpxa8jCmLso-uuaPuFgGhC.fGpxRB8Pm3c122IF Passcode: Q35f+b2J Voice recordings are available from the speakers upon request.***
What if technology, such as self-driving cars, drones, or intelligent navigation systems, could understand the world the way we do – not just seeing shapes, but recognising meaning? A person waiting at a crosswalk, a bicycle left on the pavement, or a dog running across a yard – for us, these distinctions are instant. For systems that rely on data, these distinctions have long been a challenge.