NASA Selects IBM for Next-Generation Supercomputer Applications

Jun 07, 2007

On Wednesday, NASA and IBM announced that the agency has selected an IBM System p575+ supercomputer for evaluating next-generation technology to meet its future supercomputing requirements. Supercomputers play a critical role in many NASA missions, including new space vehicle design, global climate studies and astrophysics research.

The IBM system is being installed at the NASA Advanced Supercomputing (NAS) facility at the Ames Research Center at Moffett Field, Calif., where it is undergoing testing and evaluation. With 640 computational cores and a peak performance of approximately 5.6 teraflops, the system will augment the agency's existing "Columbia" system, currently ranked as the eighth fastest supercomputer in the world.

A teraflop is a measure of a computer's speed: one teraflop equals one trillion floating-point operations per second.
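For scale, the system figures quoted above imply a peak of roughly 8.75 gigaflops per core; a minimal sketch of that arithmetic, using only the peak performance and core count stated in this article:

```python
# Figures quoted in the article for the IBM p575+ evaluation system.
peak_flops = 5.6e12  # ~5.6 teraflops = 5.6 trillion floating-point ops/sec
cores = 640          # computational cores

# Peak throughput per computational core.
per_core = peak_flops / cores
print(f"{per_core / 1e9:.2f} GFLOPS per core")  # → 8.75 GFLOPS per core
```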

"With NASA's high-end computing needs expected to continue during the next few years, we need to keep pace with improved technologies. IBM's system meets all the criteria for our base system evaluation, and working closely with them, we will chart out a successful path for the NASA supercomputing environment," said Dr. Piyush Mehrotra, who leads the NAS applications group and is steering the technology upgrade effort.

The NAS supports scientists and engineers throughout the United States who work on projects such as designing spacecraft, improving weather and hurricane models, and understanding the behavior of the sun. Many NASA projects require large, complex calculations and sophisticated mathematical models that can be efficiently handled only by a supercomputer.

"The research undertaken by NASA scientists is allowing engineers to design and build safer, more advanced spacecraft more quickly than ever," said Dave Turek, vice president of Deep Computing for IBM. "Computer simulation technology produces perfect prototypes for virtual testing, reducing the need for physical testing."

The NAS technology upgrade effort used a comprehensive benchmark suite to characterize system performance on NASA-relevant applications and to measure job throughput for a workload in a complex, high-performance computing environment.
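The job-throughput measurement described above can be illustrated with a small sketch. The job names, run times, and the simple jobs-per-hour metric here are invented for illustration; the article does not describe the actual contents of NASA's benchmark suite:

```python
# Hypothetical wall-clock run times (seconds) for a mixed workload.
# Job names and timings are invented for illustration only.
job_runtimes = {
    "cfd_wing": 3600.0,      # e.g., a spacecraft aerodynamics job
    "climate_step": 5400.0,  # e.g., a climate-model time step
    "astro_sim": 7200.0,     # e.g., an astrophysics simulation
}

total_seconds = sum(job_runtimes.values())
jobs_completed = len(job_runtimes)

# One simple throughput metric: jobs completed per hour of wall-clock time.
throughput = jobs_completed / (total_seconds / 3600.0)
print(f"{throughput:.2f} jobs/hour")  # → 0.67 jobs/hour
```

In practice a throughput benchmark would run many such jobs concurrently under a scheduler; the point here is only the shape of the metric, not the workload itself.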

The IBM p575+ supercomputer acquisition is the first phase of a four-phase procurement process that eventually will replace the Columbia supercomputer system. This phased replacement supports the requirements of the agency's Strategic Capabilities Assets Program (SCAP) High-End Computing Capability to provide supercomputing capability for NASA's programs and missions.

Source: IBM


