Software tool helps tap into the power of graphics processing

May 17, 2010

Today's computers rely on powerful graphics processing units (GPUs) to create the spectacular graphics in video games. In fact, these GPUs are now more powerful than the traditional central processing units (CPUs) - or brains of the computer. As a result, computer developers are trying to tap into the power of these GPUs. Now a research team from North Carolina State University has developed software that could make it easier for traditional software programs to take advantage of the powerful GPUs, essentially increasing complex computing brainpower.

Taking advantage of a GPU's processing ability is a big deal, because of the amount of computing power a GPU contains. The CPU in an average computer has about 10 gigaflops of computing power - or 10 billion operations per second. That sounds like a lot until you consider that the GPU in an average modern computer has 1 teraflop of computing power - which is 1 trillion operations per second.

But using a GPU for general computing functions isn't easy. The actual architecture of the GPU itself is designed to process graphics, not other applications. Because GPUs focus on turning data into millions of pixels on a screen, the architecture is designed to have many operations taking place in isolation from each other. The operation telling one pixel what to do is separate from the operations telling other pixels what to do. This hardware design makes graphics processing more efficient, but presents a stumbling block for those who want to use GPUs for more complex computing processes.
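
To picture that model, here is a minimal CUDA sketch (our illustration, not code from the NC State paper): each thread brightens exactly one pixel and never looks at what any other thread is doing, which is the kind of isolated, massively parallel work GPU hardware is built to run.

```cuda
#include <cuda_runtime.h>

// Each thread handles exactly one pixel, independently of all others --
// the "operations in isolation" pattern that GPU hardware is designed for.
__global__ void brighten(unsigned char *pixels, int n, int delta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's pixel
    if (i < n) {
        int v = pixels[i] + delta;
        pixels[i] = v > 255 ? 255 : v;              // clamp to valid range
    }
}

int main(void)
{
    const int n = 1 << 20;                          // about a million pixels
    unsigned char *d_pixels;
    cudaMalloc(&d_pixels, n);
    cudaMemset(d_pixels, 100, n);                   // fill with mid-gray

    // Launch enough threads that every pixel gets its own.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    brighten<<<blocks, threads>>>(d_pixels, n, 50);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    return 0;
}
```

Because no thread waits on another, the GPU can keep thousands of them in flight at once; the difficulty arises when a computation needs threads to share or reuse data, which is where the compiler work described below comes in.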

The NC State team's research was funded by the National Science Foundation.

"We have developed a that takes computer program A and translates it into B - which ultimately does the same thing program A does, but does it more efficiently on a GPU," says Dr. Huiyang Zhou, an associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research. This sort of translation tool is called a compiler.

Program A, which is the user-provided input, is called a "naïve" version - it doesn't consider GPU optimization, but focuses on providing a clear series of commands that tell the computer what to do. Zhou's compiler software takes the naïve version and translates it into a program that can effectively utilize the GPU's hardware so that the program runs much more quickly.
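
To make the naïve-versus-optimized distinction concrete, below is a hand-written CUDA sketch of the kind of rewrite such a compiler automates. The kernels and the shared-memory staging shown here are illustrative assumptions on our part, not output from Zhou's tool, but they capture the idea: the same computation, restructured so that far fewer reads hit slow off-chip memory.

```cuda
// "Naive" version: a clear statement of what to compute. Every thread reads
// its whole neighborhood from slow global memory, so adjacent threads fetch
// the same values again and again.
__global__ void blur_naive(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}

// GPU-aware version of the same computation: each block first stages its
// slice of the input (plus a one-element halo on each side) into fast
// on-chip shared memory, so each global value is loaded about once instead
// of three times. Launch with TILE threads per block.
#define TILE 256
__global__ void blur_tiled(const float *in, float *out, int n)
{
    __shared__ float tile[TILE + 2];          // block's slice plus halo cells
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int t = threadIdx.x + 1;                  // this thread's slot in the tile

    if (i < n)
        tile[t] = in[i];                      // each thread loads one element
    if (threadIdx.x == 0 && i > 0)
        tile[0] = in[i - 1];                  // left halo
    if (threadIdx.x == blockDim.x - 1 && i < n - 1)
        tile[TILE + 1] = in[i + 1];           // right halo
    __syncthreads();                          // wait until the tile is filled

    if (i > 0 && i < n - 1)
        out[i] = (tile[t - 1] + tile[t] + tile[t + 1]) / 3.0f;
}
```

Writing the second version by hand requires knowing about shared memory, thread blocks, and synchronization; the point of a compiler like the one described in the paper is to start from something shaped like the first version and produce something shaped like the second automatically.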

Zhou's research team tested a series of standard programs to determine whether code translated by their compiler actually operated more efficiently than code that had been manually optimized for GPU use by leading GPU developers. The compiler-translated programs ran approximately 30 percent faster than the manually optimized versions.

"Tapping into your GPU can turn your personal computer into a supercomputer," Zhou says.

More information: The paper, "A GPGPU Compiler for Memory Optimization and Parallelism Management," was co-authored by Zhou, NC State Ph.D. student Yi Yang, and University of Central Florida Ph.D. students Ping Xiang and Jingfei Kong. The paper will be presented June 7 at the Programming Language Design and Implementation conference in Toronto.

User comments: 6

Objectivist
5 / 5 (3) May 17, 2010
"or brains of the computer"

I don't know about your anatomy but my brain also holds my short term memory (RAM), my long term memory (HD), the platform between all centrals (MB), the communication between them (chipset), reptilian complex (BIOS), visual cortex (GPU), primary auditory cortex (sound card), etc.

My point is, the whole computer is a "brain". The only things that are not are mostly devices externally connected to it.
SteveL
not rated yet May 17, 2010
This isn't really new. PS3's have been using their GPUs for distributed computing (folding@home) for quite a while (years).

There are several other distributed computing projects that use the power of GPUs to advance science and technology research. Einstein@home is one of them (that I personally contribute to), and there are several others.

The good news is that as more researchers work to develop GPU-based computing, the faster and more efficient our computers can become.
donjoe0
not rated yet May 17, 2010
Yes, it's actually really new. As opposed to Folding@Home and other scientific programs which had to have separate versions written especially for GPUs, this paper suggests a general-purpose compiler that could transform ANY piece of software into something that runs on a GPU and does its job faster.

Pay attention.
SincerelyTwo
not rated yet May 17, 2010
I've been using GLSL to do audio processing for demoscene projects, GPU's iz bom. :]
PinkElephant
not rated yet May 17, 2010
With OpenCL on the way, such translation tools won't be necessary.

http://en.wikiped...i/OpenCL
brentrobot
not rated yet May 17, 2010
I wonder how efficiently this could run in terms of electrical power?

I would like to see an extremely low power net-book that had all of the processing speed of a high end desktop.