IBM To Build Supercomputer For U.S. Government

Feb 03, 2009 by John Messina weblog

The U.S. Government has contracted IBM to build a supercomputer more powerful than any system in existence today. The machine, called Sequoia, will be capable of delivering 20 petaflops (20,000 trillion sustained floating-point operations per second) and is being built for the U.S. Department of Energy.
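For scale, the quoted performance figures can be checked with a quick unit conversion. A minimal sketch, using only the 20-petaflop and 1-petaflop numbers reported in the article:

```python
# One petaflop is 10**15 floating-point operations per second,
# i.e. 1,000 trillion FLOPS.
PETAFLOP = 10**15

sequoia_flops = 20 * PETAFLOP   # Sequoia's design target
current_flops = 1 * PETAFLOP    # DOE's fastest system as of 2009

# Sequoia would be a 20x jump over the existing 1-petaflop machine.
speedup = sequoia_flops // current_flops
print(speedup)                          # 20
print(f"{sequoia_flops:.0e} FLOPS")     # 2e+16 FLOPS
```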

The U.S. Department of Energy will use the supercomputer in its nuclear stockpile research. The fastest system the department runs today delivers up to 1 petaflop. Sequoia will be located at the Lawrence Livermore National Laboratory in Livermore, Calif., and is expected to be up and running in 2012.

The Sequoia installation will also require a massive power upgrade at Lawrence Livermore, increasing the electricity available for all of the laboratory's computing systems from 12.5 megawatts to 30 megawatts. The upgrade will require running additional power lines into the facility; Sequoia alone is expected to draw approximately 6 megawatts.

The Sequoia system is so massive that IBM is first building a 500-teraflop system, called Dawn, to help researchers prepare for the larger 20-petaflop machine.

The Sequoia system will use IBM Power chips exclusively, deploying approximately 1.6 million processing cores running Linux. IBM is still developing a 45-nanometer chip for the system that may contain 8, 16, or more cores; the final chip configuration has not yet been determined. The completed system will have 1.6TB of memory.
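Taking the article's figures at face value (1.6 TB of memory across roughly 1.6 million cores, a ratio that readers question in the comments below), the per-core share works out as follows. A quick arithmetic sketch:

```python
total_memory_bytes = 1.6e12   # 1.6 TB, as stated in the article
cores = 1.6e6                 # ~1.6 million processing cores

bytes_per_core = total_memory_bytes / cores
print(bytes_per_core)         # 1000000.0 -> about 1 MB per core

# If the figure were 1.6 petabytes instead, as a commenter below
# suggests, the share would be about 1 GB per core.
print(1.6e15 / cores)         # 1000000000.0
```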

IBM plans to build the supercomputer at its Rochester, Minn., plant. The cost of the system has not been disclosed.

© 2009



User comments: 14


1.3 / 5 (7) Feb 03, 2009
oh crap...have the manufacturer with the absolute worst hardware failure record (next to Sun anyways) build the largest supercomputer ever. Yea...not a smart idea...take it from a person who has worked EXTENSIVELY with all major manufacturers' servers.
2.5 / 5 (2) Feb 03, 2009
1.6TB of memory? That will be nothing in 2012. ;D
1.5 / 5 (2) Feb 03, 2009

1.6TB of memory and you think that is nothing?! Even for 2012 (only 3 years away) that is huge and really takes a supercomputer to deal with a memory as big as that.
3.8 / 5 (4) Feb 03, 2009
I think the 1.6 TB of memory is for each node, the whole system would have petabytes. As to the reliability issue, IBM big mainframes are a different order of fish than the little servers they mass produce. When IBM builds a mainframe, they are reliable, not like the server world. Does ANYONE make a reliable server? Just like the phones nowadays, remember the old AT&T bricks? You could throw them across the room and they would still work. The so-called phones you get at Best Buy and Target and such (home phones, not cells) suck so much, they seem to be designed by freshmen or high school students with no concept of either reliability or usability.
1 / 5 (1) Feb 03, 2009
The system will have 1.6TB of memory.
5 / 5 (3) Feb 03, 2009
Unreliable? Give me a break.... IBM Mainframes and System i Power Servers have MTBF measured in decades. Their mainframes don't break... period!
1 / 5 (4) Feb 03, 2009
Let's put the world's top nuclear calculations in the hands of a computer that is unreliable lol. Oops! The simulations never mentioned that half the world would be obliterated..
5 / 5 (3) Feb 03, 2009
LOL 1.6 million processing cores!? I would love having just one of those new 16 core processors that IBM is developing.
not rated yet Feb 04, 2009
It would be a lot cooler if they named it Skynet :D
not rated yet Feb 04, 2009
1.6TB of memory and you think that is nothing?!

Yes! Even the Earth Simulator (a mere 0.036 petaflops) had 10 TB of memory.

Blue Gene/L had 32 TB of RAM and 900 TB of disk space.
not rated yet Feb 04, 2009
1.6TB for 1.6 million cores? is that like 1MB on-chip cache per core, not counting external RAM?
not rated yet Feb 05, 2009
I wonder what would they do with that power. I have my doubts about it :(
not rated yet Feb 05, 2009
make the most delicious blueberry muffin recipes ever. Also, maybe create the infinite improbability drive?
not rated yet Feb 07, 2009
The article is off by a factor of 1024; the Sequoia will have 1.6 _petabytes_ of RAM.

I wonder what would they do with that power. I have my doubts about it :(

Precisely what they're claiming they'll use it for, probably. They're a signatory of the comprehensive nuclear test-ban treaty, so to "make sure the nation's stockpile of nuclear weapons is safe and effective" they're going to keep modeling different aspects of the weapons on ever larger computers as those become available. Also known as "stockpile stewardship".
