IBM To Build Supercomputer For U.S. Government

February 3, 2009 by John Messina weblog

The U.S. Government has contracted IBM to build a massive supercomputer more powerful than any system in operation today. The supercomputer, called Sequoia, will be capable of delivering 20 petaflops (20,000 trillion sustained floating-point operations per second) and is being built for the U.S. Department of Energy.

The U.S. Department of Energy will use the supercomputer in its nuclear stockpile research. The fastest system it operates today delivers up to 1 petaflop. Sequoia will be located at the Lawrence Livermore National Laboratory in Livermore, Calif., and is expected to be up and running in 2012.

The Sequoia installation will also require a massive power upgrade at Lawrence Livermore, increasing the electricity available for all of the laboratory's computing systems from 12.5 megawatts to 30 megawatts. The upgrade will involve running additional power lines into the facility; Sequoia alone is expected to draw approximately 6 megawatts.

The Sequoia system is so massive that IBM is first building a 500-teraflop system, called Dawn, to help researchers prepare for the larger 20-petaflop machine.

The Sequoia system will use IBM Power chips exclusively, deploy approximately 1.6 million processing cores, and run the Linux OS. IBM is still developing a 45-nanometer chip for the system that may contain 8, 16, or more cores. The final chip configuration has not yet been determined, but the completed system will have 1.6TB of memory.
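The stated figures allow a quick back-of-envelope check of how the performance target spreads across the machine. The sketch below uses only the numbers quoted in the article (reported specs, not measured values):

```python
# Back-of-envelope figures for Sequoia, using the numbers
# quoted in the article (reported specs, not measured values).
PETAFLOP = 10**15           # floating-point operations per second

peak_flops = 20 * PETAFLOP  # 20 petaflops sustained
cores = 1_600_000           # ~1.6 million processing cores

# Performance each core must contribute on average.
flops_per_core = peak_flops / cores
print(f"{flops_per_core / 1e9:.1f} GFLOPS per core")  # 12.5 GFLOPS per core

# Speedup over the DOE's fastest current system (~1 petaflop).
print(f"{peak_flops / PETAFLOP:.0f}x today's fastest DOE system")  # 20x
```

At roughly 12.5 GFLOPS per core, the design relies on massive parallelism rather than exceptionally fast individual cores.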

IBM plans to build the supercomputer at its Rochester, Minn., plant. The cost of the system has not been disclosed.

© 2009





1.3 / 5 (7) Feb 03, 2009
Oh crap... have the manufacturer with the absolute worst hardware failure record (next to Sun, anyway) build the largest supercomputer ever. Yeah... not a smart idea... take it from a person who has worked EXTENSIVELY with all major manufacturers' servers.
2.5 / 5 (2) Feb 03, 2009
1.6TB of memory? That will be nothing in 2012. ;D
1.5 / 5 (2) Feb 03, 2009

1.6TB of memory and you think that is nothing?! Even for 2012 (only 3 years away) that is huge, and it really takes a supercomputer to deal with that much memory.
3.8 / 5 (4) Feb 03, 2009
I think the 1.6 TB of memory is for each node, the whole system would have petabytes. As to the reliability issue, IBM big mainframes are a different order of fish than the little servers they mass produce. When IBM builds a mainframe, they are reliable, not like the server world. Does ANYONE make a reliable server? Just like the phones nowadays, remember the old AT&T bricks? You could throw them across the room and they would still work. The so-called phones you get at Best Buy and Target and such (home phones, not cells) suck so much, they seem to be designed by freshmen or high school students with no concept of either reliability or usability.
1 / 5 (1) Feb 03, 2009
The system will have 1.6TB of memory.
5 / 5 (3) Feb 03, 2009
Unreliable? Give me a break.... IBM Mainframes and System i Power Servers have MTBF measured in decades. Their mainframes don't break... period!
1 / 5 (4) Feb 03, 2009
Let's put the world's top nuclear calculations in the hands of a computer that is unreliable, lol. Oops! The simulations never mentioned that half the world would be obliterated..
5 / 5 (3) Feb 03, 2009
LOL 1.6 million processing cores!? I would love having just one of those new 16 core processors that IBM is developing.
not rated yet Feb 04, 2009
It would be a lot cooler if they named it Skynet :D
not rated yet Feb 04, 2009
1.6TB of memory and you think that is nothing?!

Yes! Even the Earth Simulator (a mere 0.036 petaflops) had 10 TB of memory.

Blue Gene/L had 32 TB of RAM and 900 TB of disc space.
not rated yet Feb 04, 2009
1.6TB for 1.6 million cores? Is that like 1MB of on-chip cache per core, not counting external RAM?
not rated yet Feb 05, 2009
I wonder what would they do with that power. I have my doubts about it :(
not rated yet Feb 05, 2009
make the most delicious blueberry muffin recipes ever. Also, maybe create the infinite improbability drive?
not rated yet Feb 07, 2009
The article is off by a factor 1024, the Sequoia will have 1.6 _petabytes_ of RAM.

I wonder what would they do with that power. I have my doubts about it :(

Precisely what they're claiming they'll use it for, probably. The U.S. is a signatory of the comprehensive nuclear test ban treaty, so to "make sure the nation's stockpile of nuclear weapons is safe and effective" they're going to model the different aspects of the weapons computationally, using ever larger computers as they become available. Also known as "stockpile stewardship".
