IBM To Build Supercomputer For U.S. Government

Feb 03, 2009 by John Messina

(PhysOrg.com) -- The U.S. Government has contracted IBM to build a massive supercomputer more powerful than any system in operation today. The supercomputer, called Sequoia, will be capable of delivering 20 petaflops (20,000 trillion sustained floating-point operations per second) and is being built for the U.S. Department of Energy.

The U.S. Department of Energy will use the supercomputer in its nuclear stockpile research. The fastest system it has today is capable of delivering up to 1 petaflop. Sequoia will be located at the Lawrence Livermore National Laboratory in Livermore, Calif., and is expected to be up and running in 2012.

The Sequoia installation will also prompt a massive power upgrade at Lawrence Livermore, increasing the electricity available for all of the lab's computing systems from 12.5 megawatts to 30 megawatts. The upgrade will require running additional power lines into the facility. Sequoia alone is expected to draw approximately 6 megawatts.
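Taking the article's reported figures at face value (20 petaflops peak, roughly 6 megawatts of draw; these are announced, not measured, numbers), a quick back-of-envelope check gives the machine's implied energy efficiency:

```python
# Rough energy-efficiency estimate for Sequoia, using the figures
# reported in the article (assumptions, not measured values).
PETA = 10**15
peak_flops = 20 * PETA        # 20 petaflops peak
power_watts = 6 * 10**6       # ~6 megawatts

flops_per_watt = peak_flops / power_watts
print(f"{flops_per_watt / 1e9:.1f} GFLOPS per watt")  # prints "3.3 GFLOPS per watt"
```

That works out to roughly 3.3 gigaflops per watt, several times better than the leading systems of early 2009, which is consistent with the lab needing the dedicated power upgrade described above.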

The Sequoia computer is so massive that IBM is first building a 500-teraflop system, called Dawn, to help researchers prepare for the larger 20-petaflop machine.

The Sequoia system will use IBM Power chips exclusively, deploying approximately 1.6 million processing cores and running the Linux operating system. IBM is still developing a 45-nanometer chip for the system that may contain 8, 16, or more cores. The final chip configuration has not been determined, but the system will have 1.6TB of memory when complete.
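The headline numbers imply some per-core figures worth checking. A sketch using the printed values (an assumption; note the implied 1 MB of memory per core is suspiciously small for a machine of this class, which hints the memory total may be understated):

```python
# Per-core figures implied by the article's numbers:
# 1.6 million cores, 20 petaflops, 1.6 TB of memory as printed.
cores = 1_600_000
peak_flops = 20 * 10**15              # 20 petaflops
memory_bytes = 1_600_000_000_000      # 1.6 TB, as stated in the article

flops_per_core = peak_flops / cores   # 12.5 GFLOPS per core
memory_per_core = memory_bytes / cores  # only 1 MB per core

print(f"{flops_per_core / 1e9:.1f} GFLOPS/core, "
      f"{memory_per_core / 1e6:.0f} MB/core")  # prints "12.5 GFLOPS/core, 1 MB/core"
```

12.5 gigaflops per core is plausible for a many-core Power chip of that era; 1 MB of total memory per core is not, a point the commenters below pick up on.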

IBM plans to build this supercomputer at their Rochester, Minn., plant. The cost of the system has not been disclosed.

© 2009 PhysOrg.com



User comments: 14

LuckyBrandon
1.3 / 5 (7) Feb 03, 2009
oh crap...have the manufacturer with the absolute worst hardware failure record (next to Sun anyways) build the largest supercomputer ever. Yea...not a smart idea...take it from a person who has worked EXTENSIVELY with all major manufacturers' servers.
moj85
2.5 / 5 (2) Feb 03, 2009
1.6TB of memory? That will be nothing in 2012. ;D
OregonWind
1.5 / 5 (2) Feb 03, 2009
moj85

1.6TB of memory and you think that is nothing?! Even for 2012 (only 3 years away) that is huge and really takes a supercomputer to deal with a memory as big as that.
Sonhouse
3.8 / 5 (4) Feb 03, 2009
I think the 1.6 TB of memory is for each node, the whole system would have petabytes. As to the reliability issue, IBM big mainframes are a different order of fish than the little servers they mass produce. When IBM builds a mainframe, they are reliable, not like the server world. Does ANYONE make a reliable server? Just like the phones nowadays, remember the old AT&T bricks? You could throw them across the room and they would still work. The so-called phones you get at Best Buy and Target and such (home phones, not cells) suck so much, they seem to be designed by freshmen or high school students with no concept of either reliability or usability.
OregonWind
1 / 5 (1) Feb 03, 2009
The system will have 1.6TB of memory.
Chey
5 / 5 (3) Feb 03, 2009
Unreliable? Give me a break.... IBM Mainframes and System i Power Servers have MTBF measured in decades. Their mainframes don't break... period!
Bob_Kob
1 / 5 (4) Feb 03, 2009
Lets put the worlds top nuclear calculations in the hand of a computer that is unreliable lol. Oops! The simulations never mentioned that half the world would be obliterated..
columbiaman
5 / 5 (3) Feb 03, 2009
LOL 1.6 million processing cores!? I would love having just one of those new 16 core processors that IBM is developing.
Szkeptik
not rated yet Feb 04, 2009
It would be a lot cooler if they named it Skynet :D
Soylent
not rated yet Feb 04, 2009
1.6TB of memory and you think that is nothing?!


Yes! Even the Earth Simulator(a mere 0.036 petaflops) had 10 TB of memory.

Blue gene/L had 32 TB of RAM and 900 TB of disc space.
Palli
not rated yet Feb 04, 2009
1.6TB for 1.6 million cores? is that like 1MB on-chip cache per core, not counting external RAM?
denijane
not rated yet Feb 05, 2009
I wonder what would they do with that power. I have my doubts about it :(
moj85
not rated yet Feb 05, 2009
make the most delicious blueberry muffin recipes ever. Also, maybe create the infinite improbability drive?
Soylent
not rated yet Feb 07, 2009
The article is off by a factor 1024, the Sequoia will have 1.6 _petabytes_ of RAM.

I wonder what would they do with that power. I have my doubts about it :(


Precisely what they're claiming they'll use it for, probably. They're a signatory of the comprehensive nuclear test-ban treaty, so to "make sure the nation's stockpile of nuclear weapons is safe and effective" they're going to keep modeling the different aspects of them in computers, using ever larger machines as they become available. Also known as "stockpile stewardship".