End of the line for Roadrunner supercomputer

March 31, 2013 by Susan Montoya Bryan
[Image] Roadrunner supercomputer puts research at a new scale. Credit: LeRoy N. Sanchez, Records Management, Media Services and Operations

It's the end of the line for Roadrunner, a first-of-its-kind collection of processors that once reigned as the world's fastest supercomputer. The $121 million supercomputer, housed at one of the premier U.S. nuclear weapons research laboratories in northern New Mexico, will be decommissioned Sunday.

The reason? The world of supercomputing is evolving, and Roadrunner has been replaced by something smaller, faster, more energy efficient and cheaper. Still, officials at Los Alamos National Laboratory say Roadrunner remains among the 25 fastest supercomputers in the world.

"Roadrunner got everyone thinking in new ways about how to build and use a supercomputer," said Gary Grider, who works in the lab's high performance computing division. "Specialized processors are being included in new ways on new systems and being used in novel ways. Our demonstration with Roadrunner caused everyone to pay attention."

In 2008, Roadrunner was the first to break the elusive petaflop barrier, processing just over a quadrillion calculations per second.

Los Alamos teamed up with IBM to build Roadrunner from commercially available parts. They ended up with 278 refrigerator-size racks filled with two different types of processors, all linked together by 55 miles (89 kilometers) of fiber optic cable. It took nearly two dozen tractor trailer trucks to deliver the supercomputer from New York to northern New Mexico.

The supercomputer has been used over the last five years to model viruses and unseen parts of the universe, to better understand lasers and for nuclear weapons work. That includes simulations aimed at ensuring the safety and reliability of the aging U.S. arsenal.

As part of the U.S. nuclear stockpile stewardship program, researchers used Roadrunner's high-speed calculation capabilities to unravel some of the mysteries of energy flow in weapons.

Los Alamos has been helping pioneer novel computer systems for decades. In 1976, the lab helped with the development of the Cray-1. In 1993, the lab held the speed title with the Thinking Machines CM-5.

"And to think of where we're going to be in the next 10 to 15 years, it's just mindboggling," said lab spokesman Kevin Roark.

Right now, Los Alamos—along with scientists at Sandia National Laboratories in Albuquerque and Lawrence Livermore National Laboratory in California—is using a supercomputer dubbed Cielo. Installed in 2010, it's slightly faster than Roadrunner, takes up less space and came in at just under $54 million.

Roark said in the next 10 to 20 years, it's expected that the world's supercomputers will be capable of breaking the exascale barrier, or one quintillion calculations per second.

There will be no ceremony when Roadrunner is switched off Sunday, but lab officials said researchers will spend the next month experimenting with its operating system and techniques for compressing memory before dismantling begins. They say the work could help guide the design of future supercomputers.

Related Stories

Los Alamos Supercomputer Remains Fastest in World

November 18, 2008

The latest list of the TOP500 computers in the world has been announced at the SC08 supercomputing conference in Austin, Texas, and continued to place the Roadrunner supercomputer at Los Alamos National Laboratory as fastest ...

Roadrunner supercomputer puts research at a new scale

June 12, 2008

Less than a week after Los Alamos National Laboratory's Roadrunner supercomputer began operating at world-record petaflop/s data-processing speeds, Los Alamos researchers are already using the computer to mimic extremely ...

IBM's Blue Gene Pulls Away from the Pack

November 12, 2007

IBM’s Blue Gene/L supercomputer sprinted to a new world record as it continued its four-year domination of the official TOP500 Supercomputer Sites list. The world’s fastest computer at Lawrence Livermore National Laboratory ...



3 / 5 (9) Mar 31, 2013
One would think that this could be used by some other branch of government, a university or for some sort of group computational services. Ours is a disposable society that is living way beyond its means.
2.8 / 5 (6) Mar 31, 2013
One would think that this could be used by some other branch of government, a university or for some sort of group computational services. Ours is a disposable society that is living way beyond its means.
Why? It would cost them more and take more time to do it on this than on newer computers. Evolution requires death in order to function. Maintaining obsolescence is wasteful and inefficient. Progress is essential.
3.7 / 5 (6) Mar 31, 2013
It is still faster than 90% of the computers out there. One dedicated fiber optic link to the outside world and you have a great, paid for, research computer available for use by corporations and universities.
5 / 5 (3) Mar 31, 2013
It's expensive to run, takes up a lot of space, and only gets comparatively slower with time. Further, super-processing facilities need special housing, which are expensive to build in themselves. Anyhow, I wouldn't be surprised if most of the parts are donated or sold.
1 / 5 (2) Mar 31, 2013
PLUS aging supercomputers take more man-hours to operate and maintain, and cannot run on newer software developed for newer computers. And who would use them if they could get cheaper time on faster computers? Who said there was a shortage of time on newer computers?
not rated yet Mar 31, 2013
In terms of peak theoretical throughput it is fast. In terms of real-world application speed, it only reaches 10 to 20 percent of its peak.

This is typical for heterogeneous computing environments.

4 / 5 (1) Mar 31, 2013
I doubt they will be simply throwing it in the garbage. One of the 25 fastest in the world won't be wasted; I am sure there will be many bidders for it.
not rated yet Apr 01, 2013
I hope all of its parts are recycled or reused.
