Need for Speed: NSF Pursues Petaflop Computers

National Science Foundation

Kids often race their bicycles, pedaling madly to move ever faster. Then they advance to sedans, but covet sports cars, still wanting to push that envelope of speed.

Computer scientists are no different.

The fastest computers created today are capable of speeds of about a teraflop--a trillion operations per second. Already researchers are looking far ahead, yearning for computers a thousand times faster.

The National Science Foundation, in conjunction with NASA and DARPA, has funded eight research projects to explore creative approaches to petaflop computing. These pilot projects will be presented at a workshop this Sunday, Oct. 27, at the Frontiers '96 conference in Annapolis, Maryland.

To put these speeds in perspective: if the world's fastest computers now being built are like the sailing ships Christopher Columbus used to cross the Atlantic, the goal of this research program is the speed of the space shuttle. At present, computer speeds are limited by memory storage and by how fast that memory can be transferred to the working parts of the computer. Even with those issues solved, a computer operating at petaflop speeds must be massively parallel: each application must be broken into perhaps a million pieces, all calculated at once, because solving the pieces one after another would slow the machine to a crawl.
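The "broken into pieces, all calculated at once" idea can be sketched in a few lines of code. The example below is a toy illustration, not drawn from the release: it splits one sum into independent chunks and computes them concurrently, scaled down from a million pieces to four.

```python
# Toy sketch of massive parallelism (illustrative only): break one big
# job into independent pieces that can all be computed at the same time.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Compute one piece of the overall sum, needing no data from the others."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, pieces=4):
    # Split the range [0, n) into equal chunks -- the "million pieces"
    # idea, scaled down to a handful for illustration.
    step = n // pieces
    chunks = [(i * step, (i + 1) * step if i < pieces - 1 else n)
              for i in range(pieces)]
    # Each chunk is handed to a separate worker process; the partial
    # results are combined at the end.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1000))
```

A real petaflop machine would run such pieces on hundreds of thousands of processors at once; the hard part, as the researchers note, is keeping that decomposition manageable for ordinary programmers.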

"The first petaflop computers are going to be difficult to use. One of the goals of this project is to see how friendly can we keep them. You don't want computers only a few experts can use. The architectures must support a reasonable programming model without slowing down," said John Van Rosendale, NSF program manager leading the project.

But why would anyone need a thousand trillion operations per second? Any number of applications are already apparent, from real-time nuclear magnetic resonance imaging during surgery to computer-based drug design, astrophysical simulation, and modeling of environmental pollution and long-term climate change.

"Until the Internet arrived, we had no real appreciation of its impact. Petaflop computers may be like that: we have only a limited sense of the kind of applications this technology will enable," Van Rosendale said.

The eight Pursuing a Petaflop projects are:

  • A Flexible Architecture for Executing Component Software at 100 Teraflops; Andrew A. Chien and Rajesh K. Gupta, University of Illinois at Urbana-Champaign

  • Point Designs for 100 Teraflop Computers Using PIM Technologies; Peter M. Kogge, Steven C. Bass, Jay B. Brockman, Danny Z. Chen and Edwin Hsing-Mean Sha; University of Notre Dame

  • Architecture, Algorithms and Applications for Future Generation Supercomputers; Vipin Kumar and Ahmed Sameh; University of Minnesota

  • Design Studies on Petaflops Special-Purpose Hardware for Astrophysical Particle Simulations; Stephen L. W. McMillan, Drexel University; Piet Hut, Institute for Advanced Study, Princeton; Junichiro Makino, University of Tokyo; Michael L. Norman, University of Illinois at Urbana-Champaign; Frank J. Summers, Princeton University

  • Hybrid Technology Multi-Threaded Architecture; Paul Messina and Thomas Sterling; California Institute of Technology

  • Hierarchical Processors-and-Memory Architecture for High Performance Computing; Jose A.B. Fortes and Rudolph Eigenmann, Purdue University; Valerie Taylor, Northwestern University

  • The Illinois Aggressive Cache-Only Memory Architecture Multiprocessor; Josep Torrellas and David Padua, University of Illinois at Urbana-Champaign

  • A Scalable-Feasible Parallel Computer Implementing Electronic and Optical Interconnections for 156 TeraOPS Minimum Performance; Sotirios G. Ziavras and Haim Grebel, New Jersey Institute of Technology; Anthony T. Chronopoulos, Wayne State University
