Cystorm supercomputer unleashes 28.16 trillion calculations per second

Aug 21, 2009
Srinivas Aluru, left, and Steve Nystrom have worked for months to connect cables and cooling hoses and otherwise get Iowa State University's second supercomputer up to speed. Credit: Photo by Bob Elbert/Iowa State University

Srinivas Aluru recently stepped between the two rows of six tall metal racks, opened up the silver doors and showed off the 3,200 computer processor cores that power Cystorm, Iowa State University's second supercomputer.

And there's a lot of raw power in those racks.

Cystorm, a Sun Microsystems machine, boasts a peak performance of 28.16 trillion calculations per second. That's five times the peak of CyBlue, an IBM Blue Gene/L that's been on campus since early 2006 and uses 2,048 processors to do 5.7 trillion calculations per second.

Aluru, the Ross Martin Mehl and Marylyne Munas Mehl Professor of Computer Engineering and the leader of the Cystorm project, said the new machine also scores high on a more realistic test of a supercomputer's actual performance: 15.44 trillion calculations per second compared to CyBlue's 4.7 trillion per second. That measure makes Cystorm 3.3 times more powerful than CyBlue.
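The gap between the two figures is the familiar one between theoretical peak and sustained, benchmarked performance. As a rough sketch, the peak number is consistent with the machine's core count under assumed per-core specifications (the clock speed and FLOPs-per-cycle values below are illustrative assumptions, not reported hardware details):

    # Reconstructing Cystorm's peak figure from its core count.
    # The clock speed and FLOPs per cycle are assumptions for
    # illustration; the article reports only the totals.
    cores = 3200
    clock_hz = 2.2e9                 # assumed per-core clock speed
    flops_per_cycle = 4              # assumed FLOPs per core per cycle

    peak = cores * clock_hz * flops_per_cycle
    print(f"Peak: {peak / 1e12:.2f} TFLOPS")        # 28.16 TFLOPS

    sustained = 15.44e12             # reported sustained figure
    print(f"Efficiency: {sustained / peak:.1%}")    # ~54.8%

By that arithmetic, Cystorm sustains roughly 55 percent of its theoretical peak on the more realistic test.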

Those performance numbers, however, do not earn Cystorm a spot on the TOP500 list of the world's fastest supercomputers. (When CyBlue went online three years ago, it was the 99th most powerful supercomputer on the list.)

"Cystorm is going to be very good for data-intensive research projects," Aluru said. "The capabilities of Cystorm will help Iowa State researchers do new, pioneering research in their fields."

The supercomputer is targeted for work in materials science, power systems and systems biology.

Aluru said materials scientists will use the supercomputer to analyze data from the university's Local Electrode Atom Probe microscope, an instrument that can gather data and produce images at the atomic scale of billionths of a meter. Systems biologists will use the supercomputer to build gene networks that will help researchers understand how thousands of genes interact with each other. Power systems researchers will use the supercomputer to study the security, reliability and efficiency of the country's energy infrastructure. And computer engineers will use the supercomputer to build a software infrastructure that helps users make decisions by identifying relevant information sources.
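The gene-network work gives a sense of why Aluru calls Cystorm good for data-intensive projects: one common approach builds a co-expression network by correlating every pair of genes across many experiments, and the work grows quadratically with the number of genes. A minimal sketch of the idea (the array sizes and threshold are arbitrary illustrations, not details of the Iowa State pipeline):

    import numpy as np

    # Toy co-expression network: correlate every gene's expression
    # profile with every other gene's across a set of experiments.
    # Sizes are illustrative only; real studies involve thousands
    # of genes and many more experiments.
    n_genes, n_experiments = 1000, 50
    expression = np.random.rand(n_genes, n_experiments)

    # Pairwise Pearson correlation between gene expression profiles.
    corr = np.corrcoef(expression)

    # Keep an edge wherever two genes are strongly correlated.
    # With random data only chance correlations survive the cut.
    threshold = 0.5
    edges = np.argwhere(np.triu(np.abs(corr) > threshold, k=1))
    print(f"{len(edges)} candidate interactions among {n_genes} genes")

At thousands of genes the correlation matrix alone runs to millions of entries, which is where a 3,200-core machine earns its keep.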

"These research efforts will lead to significant advances in the penetration of high performance computing technology," says a summary of the Cystorm project. "The project will bring together multiple departments and research centers at Iowa State University and further enrich interdisciplinary culture and training opportunities."

Joining Aluru on the Cystorm project are five Iowa State researchers: Maneesha Aluru, an associate scientist in electrical and computer engineering and genetics, development and cell biology; Baskar Ganapathysubramanian, an assistant professor and William March Scholar in Mechanical Engineering; James McCalley, the Harpole Professor in Electrical Engineering; Krishna Rajan, a professor of materials science and engineering; and Arun Somani, an Anson Marston Distinguished Professor in Engineering and Jerry R. Junkins Endowed Chair of electrical and computer engineering. Steve Nystrom, a systems support specialist for the department of electrical and computer engineering, is the system administrator for Cystorm.

The researchers purchased the computer with a $719,000 grant from the National Science Foundation, $400,000 from Iowa State colleges, departments and researchers, and a $200,000 equipment donation from Sun Microsystems.

Because of Cystorm, the computer company will designate Iowa State a Sun Microsystems Center of Excellence for Engineering Informatics and Systems Biology.

While Cystorm is much more powerful than CyBlue, Aluru said Iowa State's first supercomputer will still be used by researchers across campus.

"CyBlue will still be around," Aluru said. "Researchers will use both systems to solve problems. Both systems enhance the research capabilities of Iowa State."

Source: Iowa State University



User comments (8)

Alexa
4.7 / 5 (3) Aug 21, 2009
15.44 trillion calculations per second (15.44 Tflops) may sound impressive - but a PC-sized Tesla system from Nvidia Corp. handles 4 Tflops for a fraction of the purchase and maintenance cost (a single Tesla unit draws about 1.2 kW).
makotech222
not rated yet Aug 21, 2009
Probably true, but I think Teslas are built in a different way. Supercomputers aren't anything like conventional computers programming-wise.
Soylent
5 / 5 (1) Aug 22, 2009
There's nothing magic about stream processing like Tesla (which is essentially a GPU). Any such system sacrifices the ability to handle branchy code with complex memory access patterns and low-latency sequential processing for high-throughput simplistic number crunching.

Raytracing: CPU. Rasterization: GPU. Databases: CPU. FFT: GPU. etc.
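The tradeoff Soylent describes is easy to see in miniature: uniform, branchless arithmetic over a large array is the shape of work a stream processor thrives on, while per-element branching is not. A small CPU-side sketch of the two access patterns, purely illustrative:

    import numpy as np

    # Throughput-friendly work: the same arithmetic applied uniformly
    # to every element, with regular memory access. This is the shape
    # of problem a stream processor (GPU) is built for.
    x = np.linspace(0.0, 1.0, 100_000)
    y = 4.0 * x * (1.0 - x)          # one branchless formula, all elements

    # Branchy work: the operation chosen for each element depends on
    # the data, so neighboring elements follow different paths. On a
    # GPU, divergent threads serialize; a CPU core handles this well.
    def branchy(value):
        if value < 0.25:
            return value ** 2
        elif value < 0.75:
            return float(np.sqrt(value))
        return 1.0 - value

    z = [branchy(v) for v in x]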
Bob_Kob
not rated yet Aug 22, 2009
So what happens when we have computers thousands of times faster than this? What are we doing with all these petaflops?
CptWozza
not rated yet Aug 22, 2009
"So what happens when we have computers thousands of times faster than this? What are we doing with all these petaflops?"

Are you suggesting that we could never make use of the extra flops? Try telling that to a systems biologist, surface scientist or lattice QCD researcher and see what response you get. A thousand times faster is scarcely enough.
Soylent
not rated yet Aug 23, 2009
"So what happens when we have computers thousands of times faster than this?"

We lament the lack of exaflop computers.

"What are we doing with all these petaflops?"

There's no lack of numbers to crunch.

You can fold proteins, you can study likely drug targets for cancer, you can simulate nuclear weapons and try to maintain an aging stockpile without underground testing, you can simulate nuclear fusion reactors, you can simulate climate change, you can study a couple of dozen water molecules accurately interacting with a surface, you could simulate a bumblebee brain, you can simulate the spread of disease through an accurate representation of the human population, you can try to artificially evolve a more efficient airplane wing, you can simulate the fluid dynamics of combustion in an internal combustion engine without making lots of simplifying assumptions, and so on.
jcrow
not rated yet Aug 23, 2009
GPUs have a much more limited instruction set than x86-type processors. They are designed to process floating-point data in very specific ways.
In the future, parts of certain CPUs will be built for processing graphics, as in AMD's Fusion or Intel's Larrabee chips. CPUs will become massively parallel.
For example, a graphics card may have 800 processing units where a typical CPU has about four. The only thing holding this technology back in mainstream computers is the challenge of programming parallel software. Most software does not know how to use four processors. Once good libraries are in place to aid developers, we will have access to some incredible power in a home PC. OpenCL looks like a good starting place.
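The parallelism gap jcrow describes is visible even at small scale: work spreads across cores only when the programmer explicitly asks for it. A minimal sketch using Python's standard library (the workload here is an arbitrary stand-in):

    from multiprocessing import Pool

    def crunch(n):
        # Arbitrary CPU-bound stand-in for real per-item work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [200_000] * 16

        # Serial version: one core, no matter how many the machine has.
        serial = [crunch(n) for n in jobs]

        # Parallel version: the pool fans the same jobs out across all
        # available cores, but only because we explicitly asked it to.
        with Pool() as pool:
            parallel = pool.map(crunch, jobs)

        assert serial == parallel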
probes
not rated yet Aug 24, 2009
You may suggest also, whilst on the subject of OpenCL, things pertaining to my peterflops. Thank you.