IBM To Build Supercomputer For U.S. Government

(PhysOrg.com) -- The U.S. government has contracted IBM to build a massive supercomputer more powerful than any system currently in operation. The system, called Sequoia, will be capable of delivering 20 petaflops (20,000 trillion sustained floating-point operations per second) and is being built for the U.S. Department of Energy.

The U.S. Department of Energy will use the supercomputer in its nuclear stockpile research. The fastest system it has today is capable of delivering up to 1 petaflop. The system will be located at the Lawrence Livermore National Laboratory in Livermore, Calif., and is expected to be up and running in 2012.

The Sequoia installation will also be accompanied by a massive power upgrade at Lawrence Livermore, increasing the electricity available to all of the lab's computing systems from 12.5 megawatts to 30 megawatts. The upgrade will require running additional power lines into the facility; Sequoia alone is expected to draw approximately 6 megawatts.

Sequoia is so massive that IBM is first building a smaller 500-teraflop system, called Dawn, to help researchers prepare for the larger 20-petaflop machine.

The Sequoia system will use IBM Power chips throughout, deploying approximately 1.6 million processing cores and running the Linux operating system. IBM is still developing a 45-nanometer chip for the system that may contain 8, 16, or more cores per chip. The final chip configuration has not yet been determined, but the system will have 1.6TB of memory when completed.
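
The published numbers invite some quick arithmetic. Below is a minimal back-of-envelope sketch in Python; the 16-cores-per-chip figure is only one of the configurations mentioned above, and the memory value is taken as printed (a commenter below argues it should read petabytes):

    # Back-of-envelope check on the article's figures.
    # Assumptions: 16 cores per chip (one of the configurations
    # under consideration) and the 1.6TB memory figure as printed.

    PFLOPS = 20                    # peak performance, petaflops
    CORES = 1_600_000              # approximate processing cores
    CORES_PER_CHIP = 16            # assumed chip configuration
    MEMORY_TB = 1.6                # memory as printed
    SITE_MW, SEQUOIA_MW = 30, 6    # facility budget vs. Sequoia's draw

    print(f"{PFLOPS * 1e15:.0e} floating-point operations per second")
    print(f"~{CORES / CORES_PER_CHIP:,.0f} chips at {CORES_PER_CHIP} cores each")
    print(f"~{MEMORY_TB * 1e6 / CORES:.1f} MB of memory per core as printed")
    print(f"Sequoia would use {SEQUOIA_MW / SITE_MW:.0%} of the upgraded power budget")

At 1.6TB, this works out to roughly 1 MB of memory per core, the same figure a commenter questions below; at 1.6 petabytes it would be roughly 1 GB per core.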

IBM plans to build this supercomputer at its Rochester, Minn., plant. The cost of the system has not been disclosed.

© 2009 PhysOrg.com



Citation: IBM To Build Supercomputer For U.S. Government (2009, February 3) retrieved 27 June 2019 from https://phys.org/news/2009-02-ibm-supercomputer.html

User comments

Feb 03, 2009
1.6TB of memory? That will be nothing in 2012. ;D

Feb 03, 2009
Unreliable? Give me a break.... IBM Mainframes and System i Power Servers have MTBF measured in decades. Their mainframes don't break... period!

Feb 03, 2009
Let's put the world's top nuclear calculations in the hands of a computer that is unreliable lol. Oops! The simulations never mentioned that half the world would be obliterated..

Feb 03, 2009
LOL 1.6 million processing cores!? I would love to have just one of those new 16-core processors that IBM is developing.

Feb 04, 2009
It would be a lot cooler if they named it Skynet :D

Feb 04, 2009
1.6TB of memory and you think that is nothing?!


Yes! Even the Earth Simulator (a mere 0.036 petaflops) had 10 TB of memory.

Blue Gene/L had 32 TB of RAM and 900 TB of disk space.

Feb 04, 2009
1.6TB for 1.6 million cores? Is that like 1MB of on-chip cache per core, not counting external RAM?

Feb 05, 2009
I wonder what they would do with all that power. I have my doubts about it :(

Feb 05, 2009
Make the most delicious blueberry muffin recipes ever. Also, maybe create the Infinite Improbability Drive?

Feb 07, 2009
The article is off by a factor of 1024; Sequoia will have 1.6 _petabytes_ of RAM.

I wonder what they would do with all that power. I have my doubts about it :(


Precisely what they're claiming they'll use it for, probably. They're a signatory of the Comprehensive Nuclear Test-Ban Treaty, so to "make sure the nation's stockpile of nuclear weapons is safe and effective" they're going to keep modeling the different aspects of those weapons, using ever larger computers as they become available. Also known as "stockpile stewardship".
