U.S. Department of Energy Research News


DOE Science Grid

IBM and the Department of Energy's National Energy Research Scientific Computing Center are transforming far-flung supercomputers into a utility-like service called the DOE Science Grid

April 1, 2002—IBM and the U.S. Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) recently announced a collaboration to begin deploying the first systems on a nationwide computing grid that will empower researchers to tackle scientific challenges beyond the capability of existing computers.

Beginning with two IBM supercomputers and a massive IBM storage repository, the DOE Science Grid will ultimately grow into a system capable of processing more than five trillion calculations per second and storing information equivalent to 200 times the number of books in the Library of Congress. The collaboration will make the largest unclassified supercomputer and largest data storage system within DOE available via the Science Grid by December 2002—two years sooner than expected.

The DOE Science Grid will also give scientists around the country access to far-flung supercomputers and data storage in the same way that an electrical grid provides consumers with access to widely dispersed power-generating resources.

"Computing and data grids will establish a uniform computing and data handling environment—independent of location—that can be integrated with scientists' work environment in much the same way that the Web provided a way to integrate on-line documents into the scientific work environment," said Horst Simon, director of the NERSC Division at Lawrence Berkeley National Laboratory (Berkeley Lab). "Undertaking such a large and long-term project, we are especially pleased to be working with IBM, which has made grid computing central to its e-business strategy."

Simon added, "Connecting supercomputer centers to grids will provide the scientific community with a much more capable set of computing and data management tools than those available today, and tools that can be used more easily and effectively than today's tools. This should have a substantial productivity benefit for scientific R&D, and will open up entirely new avenues of exploration."

"The DOE Science Grid is a template for the kind of system that can enable partnerships between public institutions and private companies aimed at creating new products and technologies for business," said Peter Ungaro, vice president, high-performance computing, IBM Server Group. "This collaboration between IBM and NERSC is a big step forward in realizing the Grid's promise of delivering computing resources as a utility-like service."

The Emerging Grid

Grids allow geographically distributed organizations to share applications, data and computing resources. An emerging model of computing, grids are built from clusters of servers joined together over the Internet, using protocols provided by the Globus open source community (Globus.org) and other open technologies, including Linux(R).

The DOE Science Grid's goal is to enhance the ability of DOE scientists to explore the physical world through computational simulation and scientific experiments and analysis of the resulting data. The Science Grid will enable scientists at national laboratories and universities around the country to perform ever-greater calculations, manage and analyze ever-larger datasets, and perform ever-more complex computer modeling necessary for DOE to accomplish its scientific missions. In the future, supercomputers, data storage and experimental facilities at Berkeley Lab, Argonne, Oak Ridge, and Pacific Northwest national laboratories are all expected to be connected to the DOE Science Grid.

The DOE Science Grid will give scientists real-time access to the trillions of bytes of data that are stored at national labs around the country. This kind of seamless access to information is required for large-scale projects such as genomic and astrophysics research, which generate much more data than can be stored in a single location.

As it evolves into a reliable infrastructure supporting scientific R&D, the DOE Science Grid will also facilitate development and use of collaboration tools that speed up research and allow scientists to tackle more complex problems. NERSC is located at Berkeley Lab, which has been developing distributed collaboration and distributed data-handling technology for the past 10 years. This decade-long effort provided some of the precursor grid tools and technologies.

"The combination of NERSC and the DOE Science Grid should provide an unprecedented capability for incorporating high-end simulation and data handling into the scientists' working environment where it can be combined with local compute and data systems, and eventually with the experiments themselves," said Bill Johnston, head of Berkeley Lab's Distributed Systems Department and one of the architects of the DOE Science Grid. "NERSC provides DOE's Office of Science with its major tools for computational simulation and data analysis and storage, so this integration of the most capable computing facilities directly with the scientists' working environment is what will create new levels of scientific capability and productivity."

NERSC, which operates a 3,328-processor IBM supercomputer (currently the third most powerful computer on earth, according to the TOP500 List of Supercomputers), had originally planned to make its high-performance computing systems accessible via the DOE Science Grid by 2004. The collaboration announced today will allow a core group of NERSC's 2,100 users to begin accessing resources via the DOE Science Grid two years earlier than originally planned.

"We have been working closely with IBM since the installation of our IBM supercomputer in 2000. Because we have a common interest in advancing Grid technology, it made sense to work together," said Bill Kramer, who is in charge of NERSC's computer operations. "As DOE's flagship center for unclassified computing, making our resources more easily and more widely accessible via the Grid will enhance research across a broad spectrum of scientific disciplines."

In addition to the large IBM supercomputer system, DOE Science Grid software will be integrated into NERSC's HPSS (High Performance Storage System) archival data storage system, which has a capacity of 1.3 petabytes and is managed using IBM servers. NERSC and IBM have a strong history of working together to bring new technology to bear on the most challenging scientific problems. For example, NERSC and IBM are two of the six development partners that created and improved the HPSS. NERSC also operates a 160-processor IBM Netfinity cluster computer system.

By the end of the year, all three of NERSC's IBM systems are expected to be on the Grid. To do this, IBM will develop its software to be compatible with Globus and other Grid software, and NERSC will then move the software into service. NERSC and IBM will also use the collaboration to identify areas where the Grid software can be improved.

Scientists Already Report Significant Results

Early users of the IBM supercomputer have reported important breakthroughs in areas such as understanding metallic magnetism and achieving much higher resolutions for climate models, making them much more useful.

Better Understanding of Magnetic Forces: As part of the extensive testing of the IBM SP system, a DOE research team from Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, and the Pittsburgh Supercomputing Center used the supercomputer to perform first-principles spin dynamics simulations of the magnetic structure of iron-manganese/cobalt interfaces.

These large-scale quantum mechanical simulations, involving 2,016-atom super-cell models, reveal details of the orientational configuration of the magnetic moments at the interface that are unobtainable by any other means. This work is of fundamental importance in improving magnetic multi-layer computer storage and read head devices. Using 2,176 processors on the IBM SP, the team achieved a maximum execution rate of 2.46 teraflop/s, one of the highest levels ever for a code producing significant scientific results.

High-Resolution Global Climate Modeling: Phil Duffy of the Climate and Carbon Cycle Modeling Group at Lawrence Livermore National Laboratory (LLNL) reported that his group used NERSC's IBM SP to run a global climate change simulation at the highest spatial resolution ever used for such a simulation, making the model more useful for studying regional climate change.

Global climate simulations are typically performed on a latitude-longitude grid, with grid cell sizes of about 300 kilometers. Although simulations of this type can provide useful information on continental and larger scales, they cannot provide meaningful information on regional scales, such as for the state of California, Duffy said.

"Thus, coarse-resolution global climate simulations cannot provide information on many of the most important societal impacts of climate change, such as impacts on water resource management, agriculture, human health, etc." Duffy said. "To do this would require simulations with much finer spatial resolution. Using NERSC's new IBM, as well as supercomputers here at LLNL, we have been experimenting with running global climate simulations at 50 km resolution. This is finer resolution than has ever been attempted in a global climate calculation."

Compared to a typical global climate simulation, this 50-km simulation has 32 times more grid cells and takes up to 200 times longer to run on a computer. "Obviously, such a calculation could not even be attempted without access to extraordinary computational resources," Duffy said. "Our goal for the 50-km global climate simulation is to evaluate how well the model simulates the present climate at this resolution. Thus far we have run about three simulated years; preliminary analysis of the results seems to indicate that the model is very robust to a large increase in spatial resolution."
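The scaling Duffy describes can be sketched with back-of-the-envelope arithmetic. The sketch below is illustrative, not from the article: it assumes the horizontal cell count grows with the square of the resolution ratio and that the timestep must shrink linearly with cell size (a CFL-like constraint), so total cost grows roughly with the cube of the ratio.

```python
# Illustrative scaling sketch for refining a global climate grid.
# Assumptions (not from the article): cell count scales with the square
# of the resolution ratio; timestep count scales linearly with it; total
# cost is cells x timesteps, i.e. roughly the cube of the ratio.

def refinement_scaling(coarse_km: float, fine_km: float) -> dict:
    ratio = coarse_km / fine_km           # linear refinement factor
    return {
        "cells": ratio ** 2,              # more horizontal grid cells
        "timesteps": ratio,               # more timesteps for the same period
        "cost": ratio ** 3,               # cells x timesteps
    }

s = refinement_scaling(300.0, 50.0)
print(s)  # {'cells': 36.0, 'timesteps': 6.0, 'cost': 216.0}
```

With a 300-km baseline this gives roughly 36 times the cells and 216 times the cost; the article's figures of "32 times more grid cells" and "up to 200 times longer" are in the same ballpark, with the exact numbers depending on the model's actual baseline resolution and timestep settings.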


Media contacts: John Buscemi, IBM, jbuscemi@us.ibm.com, (914) 766-4495; Jon Bashor, NERSC, JBashor@lbl.gov, (510) 486-5849

Related Web Links

"IBM Fast-Tracks Open-Source Protocols for DOE Supercomputer," GenomeWeb, March 25, 2002.

Climate and Carbon Cycle Modeling Group, Lawrence Livermore National Laboratory

Magnetic Materials: Bridging Basic and Applied Science, Science Highlights, Lawrence Berkeley National Laboratory

The Globus Project

Top 500 Supercomputers (NERSC's IBM SP is #3 in the list)

Funding: The DOE Science Grid and the National Energy Research Scientific Computing Center (NERSC) are supported by the Department of Energy's Office of Advanced Scientific Computing Research.

The National Energy Research Scientific Computing (NERSC) Center at Lawrence Berkeley National Laboratory is one of the nation's most powerful unclassified computing resources and is a world leader in accelerating scientific discovery through computation.

Lawrence Berkeley National Laboratory is a U.S. Department of Energy laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California.

Author: Jon Bashor is a science writer at Lawrence Berkeley National Laboratory who covers Computing Sciences research at the Lab and for the National Energy Research Scientific Computing Center. For more science news, see Berkeley Lab's Science Beat.

