Research team 'virtualizes' supercomputer

Jan 20, 2010

A collaboration between researchers at Northwestern University, Sandia National Labs and the University of New Mexico has resulted in the largest-scale study ever done on what many consider an important part of the future of computing -- the virtualization of parallel supercomputing systems.

As part of this collaboration, Peter A. Dinda, associate professor of electrical engineering and computer science at Northwestern's McCormick School of Engineering and Applied Science, and his graduate student Jack Lange led the development of a virtual machine monitor called Palacios, designed specifically for supercomputers. The system was tested Dec. 3 on Sandia's world-class Red Storm supercomputer. Sandia researchers, led by Kevin Pedretti, helped adapt and optimize Palacios for the Red Storm environment and directed the testing effort.

Results show that the team successfully virtualized Red Storm using the Palacios virtual machine monitor and ran communication-intensive, fine-grain parallel benchmarks of critical interest to Sandia with extremely high performance. Testing scaled up to 4,096 nodes, making this the largest-scale study of its kind by at least two orders of magnitude.

"Virtualizing a parallel supercomputer is particularly challenging because of the need to support extremely low latency, high-bandwidth communication among thousands of virtual machines," Dinda says. "Supercomputing users and the owners of supercomputers will not tolerate any performance compromises because the machines are so expensive to acquire and maintain, but, on the other hand, they also want access to the benefits of virtualization."

A virtual machine monitor (VMM) works by separating a computer's software from its hardware. This layer of indirection enables a range of benefits. For example, a VMM allows an operating system from one machine to run on another machine, say one with more memory. It can also allow a single machine to run multiple operating systems simultaneously, and it makes it possible to migrate a running operating system from one computer to another.
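As a toy sketch of that indirection (hypothetical classes, not Palacios code), a monitor can intercept a guest's resource requests and map them onto whichever host it actually runs on, so the same guest code runs unchanged on different hardware:

```python
class Host:
    """Stand-in for physical hardware with a fixed amount of memory."""
    def __init__(self, name, memory_mb):
        self.name = name
        self.memory_mb = memory_mb

class Monitor:
    """Intercepts the guest's resource requests and maps them to a host."""
    def __init__(self, host):
        self.host = host
        self.allocated_mb = 0

    def alloc(self, mb):
        # Trap the request and satisfy it from the current host's memory.
        if self.allocated_mb + mb > self.host.memory_mb:
            raise MemoryError(f"{self.host.name} is out of memory")
        self.allocated_mb += mb
        return mb

def guest_os(monitor):
    """The same 'operating system' code runs unchanged on any host."""
    return sum(monitor.alloc(256) for _ in range(4))  # request 1 GB total

small = Monitor(Host("laptop", 2048))
big = Monitor(Host("node", 32768))
print(guest_os(small), guest_os(big))  # 1024 1024 on both machines
```

The guest never names the hardware it runs on; only the monitor does, which is what makes moving a guest between machines possible.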

In the case of supercomputing, the VMM also acts as a translator between a user's software and the highly specialized hardware and software environments of the system, which could potentially allow more researchers to use supercomputers to solve complex problems.

With more than 38,000 processors, Red Storm is a massively parallel processing supercomputer that was uniquely designed to support modeling and simulation of complex problems in nuclear weapons stockpile stewardship. It is currently the 17th fastest computer in the world, with a theoretical peak performance of 284 trillion floating point operations per second in a relatively compact 3,500-square-foot footprint.

Virtualization on such a machine is important because it will allow more researchers to run scientific computing and simulation programs without reconfiguring their software for the machine's specific hardware and software environments. In this context, thousands of virtual machines must cooperate to solve large problems. But because the system is extremely expensive to run, any VMM must have low overhead, and that overhead is magnified by the fine-grain interactions among the virtual machines.

At these massive scales, the Palacios virtual machine monitor had a measured overhead of less than 5 percent. The results clearly indicate that it is possible to bring the benefits of virtualization to even the largest computers in the world without performance compromises.

Virtualization is big business, with the market research and analysis firm IDC forecasting annual revenues to grow from $5.5 billion in 2007 to $11.7 billion in 2011.

"If we can virtualize supercomputers without performance compromises, we will make them easier to use and easier to manage, generally increasing the utility of these very large national infrastructure investments," Dinda says.

"The end goal is to provide a more flexible supercomputer environment to end users without sacrificing performance," Pedretti says. "The successful experiments with Palacios on Red Storm demonstrate the feasibility of our approach, and we hope to incorporate this technology in future capability supercomputer platforms."

More information: Researchers can learn more about Palacios and download the latest version at v3vee.org.
