Software tool helps tap into the power of graphics processing

May 17, 2010

Today's computers rely on powerful graphics processing units (GPUs) to create the spectacular graphics in video games. In fact, these GPUs are now more powerful than the traditional central processing units (CPUs) - or brains of the computer. As a result, software developers are trying to tap into the power of these GPUs. Now a research team from North Carolina State University has developed software that could make it easier for traditional software programs to take advantage of these powerful GPUs, essentially increasing complex computing brainpower.

Taking advantage of a GPU's processing ability is a big deal because of the amount of computing power a GPU contains. The CPU in an average computer has about 10 gigaflops of computing power - or 10 billion operations per second. That sounds like a lot until you consider that the GPU in an average modern computer has 1 teraflop of computing power - which is 1 trillion operations per second.
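Those peak-throughput figures imply a hundredfold gap, which a quick back-of-the-envelope check makes concrete (the figures are the article's round numbers, not measured rates):

```python
# Back-of-the-envelope comparison using the article's round numbers.
cpu_flops = 10e9   # ~10 gigaflops for an average CPU
gpu_flops = 1e12   # ~1 teraflop for an average GPU

speedup = gpu_flops / cpu_flops
print(f"Peak-throughput ratio: {speedup:.0f}x")

# Time to perform one trillion operations at each peak rate:
ops = 1e12
print(f"CPU: {ops / cpu_flops:.0f} s, GPU: {ops / gpu_flops:.0f} s")
```

At these peak rates, a workload of one trillion operations that would occupy the CPU for 100 seconds finishes on the GPU in one second.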

But using a GPU for general computing functions isn't easy. The actual architecture of the GPU itself is designed to process graphics, not other applications. Because GPUs focus on turning data into millions of pixels on a screen, the architecture is designed to have many operations taking place in isolation from each other. The operation telling one pixel what to do is separate from the operations telling other pixels what to do. This hardware design makes graphics processing more efficient, but presents a stumbling block for those who want to use GPUs for more complex computing processes.
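The per-pixel independence described above is what makes graphics so parallel-friendly: because no pixel's operation touches another pixel's data, the work can be split across any number of workers and still produce the same answer. A minimal CPU-side sketch (a hypothetical `shade` operation standing in for a real pixel shader):

```python
# Each "pixel" computation depends only on its own input, so sequential
# and parallel execution give identical results with no coordination.
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Hypothetical per-pixel operation: brighten and clamp to 255.
    return min(pixel * 2, 255)

pixels = [10, 100, 200, 30]

sequential = [shade(p) for p in pixels]
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(shade, pixels))

assert sequential == parallel == [20, 200, 255, 60]
```

Computations whose steps depend on each other's results cannot be sliced up this way, which is exactly the stumbling block the article describes for general-purpose GPU use.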

The research was funded by the National Science Foundation.

"We have developed a tool that takes computer program A and translates it into program B - which ultimately does the same thing program A does, but does it more efficiently on a GPU," says Dr. Huiyang Zhou, an associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research. This sort of translation tool is called a compiler.

Program A, which is the user-provided input, is called a "naïve" version - it doesn't consider GPU optimization, but focuses on providing a clear series of commands that tell the computer what to do. Zhou's compiler software takes the naïve version and translates it into a program that can effectively utilize the GPU's hardware so that the program operates a lot more quickly.
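The paper's actual optimizations target GPU memory and parallelism; as a loose CPU-side analogy only (not the compiler's method), the sketch below shows the flavor of such a rewrite. The naive version re-reads shared data on every step, while the transformed version stages it into a local copy once, analogous to moving data from slow global memory into fast on-chip memory. Both functions and inputs here are hypothetical.

```python
# Illustrative only: the kind of rewrite a GPGPU compiler might apply.
# Naive version: every output element re-reads the filter weights from
# a plain list, standing in for slow GPU global memory.
def blur_naive(signal, weights):
    n, k = len(signal), len(weights)
    out = []
    for i in range(n - k + 1):
        acc = 0.0
        for j in range(k):
            acc += signal[i + j] * weights[j]   # repeated "global" reads
        out.append(acc)
    return out

# Transformed version: hoist the weights into a local tuple once,
# analogous to staging data in fast on-chip shared memory.
def blur_opt(signal, weights):
    w = tuple(weights)                          # one-time staging copy
    k = len(w)
    return [sum(signal[i + j] * w[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

sig = [1.0, 2.0, 3.0, 4.0]
wts = [0.5, 0.5]
assert blur_naive(sig, wts) == blur_opt(sig, wts) == [1.5, 2.5, 3.5]
```

The key property of such a compiler is that the two versions compute identical results; only the memory-access pattern changes.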

Zhou's research team tested a series of standard programs to determine whether programs translated by their compiler software actually operated more efficiently than code that had been manually optimized for GPU use by leading GPU developers. Their results showed that programs translated by their compiler software ran approximately 30 percent more quickly than those optimized by the GPU developers.

"Tapping into your GPU can turn your personal computer into a supercomputer," Zhou says.


More information: The paper, "A GPGPU Compiler for Memory Optimization and Parallelism Management," was co-authored by Zhou, NC State Ph.D. student Yi Yang, and University of Central Florida Ph.D. students Ping Xiang and Jingfei Kong. The paper will be presented June 7 at the Programming Language Design and Implementation conference in Toronto.




5 / 5 (3) May 17, 2010
"or brains of the computer"

I don't know about your anatomy but my brain also holds my short term memory (RAM), my long term memory (HD), the platform between all centrals (MB), the communication between them (chipset), reptilian complex (BIOS), visual cortex (GPU), primary auditory cortex (sound card), etc.

My point is, the whole computer is a "brain". The only things that are not are mostly devices externally connected to it.
not rated yet May 17, 2010
This isn't really new. PS3's have been using their GPUs for distributed computing (folding@home) for quite a while (years).

There are several other distributed computing projects that use the power of GPUs to advance science and technology research. Einstein@home is one of them (that I personally contribute to), and there are several others.

The good news is that as more researchers work to develop GPU-based computing, the faster and more efficient our computers can become.
not rated yet May 17, 2010
Yes, it's actually really new. As opposed to Folding@Home and other scientific programs which had to have separate versions written especially for GPUs, this paper suggests a general-purpose compiler that could transform ANY piece of software into something that runs on a GPU and does its job faster.

Pay attention.
not rated yet May 17, 2010
I've been using GLSL to do audio processing for demoscene projects, GPU's iz bom. :]
not rated yet May 17, 2010
With OpenCL on the way, such translation tools won't be necessary.

not rated yet May 17, 2010
I wonder how efficiently this could run in terms of electrical power?

I would like to see an extremely low power net-book that had all of the processing speed of a high end desktop.
