Feature Story | 14-Nov-2025

Largest-ever universe simulation up for supercomputing’s highest prize

DOE researchers are recognized for simulations on the Frontier supercomputer that capture the interactions between gravity and gas spanning 15 billion light-years of cosmic space

DOE/Oak Ridge National Laboratory

Last fall, as the finalists for the Association for Computing Machinery’s 2024 Gordon Bell Prize were waiting to hear their names called, researchers at the Department of Energy’s Argonne and Oak Ridge National Laboratories had just finished running the largest astrophysical simulation of the universe ever conducted — and now, it could be their year to win it all.

Winners of the Gordon Bell Prize will be announced at this year’s International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25), held in St. Louis, Missouri, on Nov. 16-21.

The calculations were performed on ORNL’s Frontier supercomputer, the world’s most powerful supercomputer for open science. The achievement sets a new benchmark for simulating the universe, enabling scientists to study atomic matter and dark matter simultaneously. The simulations tracked 4 trillion particles, represented across 15 billion light-years of space — delivering a 15-fold leap in capability over the previous state-of-the-art simulations.

“This really was a decade of work,” said Argonne’s Nick Frontiere, who led the supercomputer simulations. “High-performance computing is a niche field, and the Gordon Bell Prize is the ultimate recognition that you not only know how to use these machines, but have done so in a truly groundbreaking way. So even as a finalist, I already feel like our team has won something very special.”

The Frontier supercomputer has a top speed of 2 exaflops — roughly two billion-billion calculations per second. The team pushed Frontier as close to its limits as possible by harnessing nearly 9,000 of its 9,402 nodes, powered by 37,888 AMD Instinct™ MI250X GPUs.

Frontiere said the use of four key innovations was instrumental in achieving the record-breaking performance.

  • GPU Tree Solver — a GPU-optimized tree data structure that computes all local force interactions for gas and dark matter.
  • Warp Splitting — an algorithm developed by Frontiere that splits each GPU warp so threads can share partial results, cutting redundancy and speeding physics calculations (a generic sketch of warp-level result sharing follows this list).
  • In situ GPU-Accelerated End-to-End Analysis — allows data to be processed directly on the GPU while the simulation is running, reducing the need to store massive data sets for post-processing.
  • Multi-Tiered I/O — buffers simulation data on fast node-local Non-Volatile Memory Express (NVMe) drives before asynchronously transferring it to the shared parallel file system, overlapping I/O with execution to minimize overhead.
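
To make the warp-splitting idea concrete, the sketch below shows generic warp-level sharing of partial results on a GPU. It is written in CUDA with an invented toy kernel (pair_force_sum, a 1/r² stand-in for a force calculation) rather than HACC’s actual code; Frontier’s AMD GPUs use the analogous HIP wavefront primitives. Each thread computes a partial force, the lanes of a warp exchange partial sums directly in registers with shuffle intrinsics, and a single lane writes the combined result — the kind of redundant memory traffic the technique is meant to cut.

```cuda
// Illustrative sketch only: not the HACC warp-splitting algorithm itself.
// It demonstrates warp-level exchange of partial results via shuffle
// intrinsics. Kernel and variable names (pair_force_sum, dx, force) are
// invented for this example.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void pair_force_sum(const float* __restrict__ dx,
                               float* __restrict__ force, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Each thread computes a partial force for one pair separation (toy 1/r^2).
    float f = (i < n) ? 1.0f / (dx[i] * dx[i] + 1e-6f) : 0.0f;

    // Warp-level tree reduction: lanes pass partial sums between registers
    // instead of staging them in shared or global memory.
    for (int offset = warpSize / 2; offset > 0; offset >>= 1)
        f += __shfl_down_sync(0xffffffffu, f, offset);

    // One lane per warp now holds that warp's total and does the single write.
    if ((threadIdx.x & (warpSize - 1)) == 0)
        atomicAdd(force, f);
}

int main()
{
    const int n = 1 << 20;                       // one million toy pair separations
    std::vector<float> h_dx(n, 1.0f);            // all separations set to 1.0

    float *d_dx = nullptr, *d_force = nullptr;
    cudaMalloc(&d_dx, n * sizeof(float));
    cudaMalloc(&d_force, sizeof(float));
    cudaMemcpy(d_dx, h_dx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_force, 0, sizeof(float));

    const int block = 256;
    pair_force_sum<<<(n + block - 1) / block, block>>>(d_dx, d_force, n);

    float h_force = 0.0f;
    cudaMemcpy(&h_force, d_force, sizeof(float), cudaMemcpyDeviceToHost);
    printf("accumulated force: %g (expected ~%d)\n", h_force, n);

    cudaFree(d_dx);
    cudaFree(d_force);
    return 0;
}
```

HACC’s production implementation differs in its details; the point of the sketch is only that exchanging partial results inside a warp avoids repeated trips through shared or global memory, which is why such restructuring pays off at exascale.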

“When we ran these simulations 10 years ago on Titan, our record speed was around 20 petaflops using much simpler physics, and that was the fastest machine at the time. Now, on Frontier, we’re achieving over 500 petaflops with full astrophysics models included,” Frontiere said. “If we were to run the same simulation using only CPUs, it would take a year. It shows how powerful GPUs are and how important it is to have software that can really take advantage of them.”

The novel simulation used the supercomputer code HACC, short for Hardware/Hybrid Accelerated Cosmology Code. Originally designed to run on petascale supercomputers, HACC was optimized through ExaSky, a special project led by Salman Habib, Argonne Division Director for Computational Science, as part of the Exascale Computing Project (ECP). The goal of ExaSky was to run HACC more than 50 times faster than it had run on former world-leading machines such as the Titan supercomputer.

“The aim was not only to run faster but also to add much richer physics capabilities and associated performance analysis tools,” Habib said. He added that the collaborative ethos of ECP — which involved science teams, software providers and industry, all working together — was essential in moving the project forward.

"The recent run on Frontier has provided a massive data set that will be mined for years to come," Habib added. "In the meantime, we are already thinking of what to do next, which is a true testament to the power of exascale computing."

In addition to Frontiere and Habib, the HACC team members involved in the achievement and other simulations building up to the work on Frontier include J.D. Emberson, Michael Buehlmann, Esteban Rangel, Katrin Heitmann, Patricia Larsen, Vitali Morozov, Adrian Pope, Claude-André Faucher-Giguère, Antigoni Georgiadou, Damien Lebrun-Grandié, and Andrey Prokopenko.

Related Publication:
Nicholas Frontiere et al., “Cosmological Hydrodynamics at Exascale: A Trillion-Particle Leap in Capability,” arXiv (2025). https://doi.org/10.48550/arXiv.2510.03557

The Frontier supercomputer is managed by ORNL’s Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.
