Meeting consumers' HD demands with a faster algorithm

July 7, 2010 by Kurt Pfitzer

Engineers help smaller processors outperform a single superfast processor by working more efficiently in parallel.

In their insatiable demand for gadgets that are faster, smaller and more capable, says Zhiyuan Yan, consumers are straining the capacity of information technology.

Devices that offer high-definition (HD) images and portability—such as cell phones with cameras and Internet access—require high throughput, or processing speed, says Yan, an assistant professor of electrical and computer engineering. To be portable and small, the devices must also be able to operate with little power.

These two trends—greater performance and stringent power requirements—pose challenges to the technology and the mathematical equations on which the IT revolution has relied.

“HD applications require high throughput and low power in order to be handheld and mobile,” says Yan. “When you take these two together, you’re in a sense burning the candle at both ends.”

Yan and Meghanad Wagh, an associate professor of electrical and computer engineering, recently received a three-year grant from the National Science Foundation to devise scalable bilinear algorithms to help meet these challenges.

“This project represents a different way of thinking about algorithms,” says Wagh. “Normally in signal processing, you write algorithms meant for standard computer architectures. But to achieve the required speed, we have developed an entirely new class of algorithms that can be directly cast into hardware. Our new algorithms are extremely fast and take advantage of the new trend in technology.”

Helping smaller processors work faster in parallel

The new algorithms are more suitable than traditional algorithms for high-performance computing.

“The IT revolution of the last 20 years has been driven in part by the scaling of CMOS [complementary metal-oxide semiconductor] technology,” says Yan. “We can pack more information in smaller chips, but the more we do this, the closer we approach the limits of technology.

“Until a few years ago, to improve computer speed, you used a larger processor. Now, you split the computation and do it in parallel with many smaller processors. Many less-powerful processors can be just as good as or better than one superfast processor, while consuming less power.

“The trouble is that a lot of traditional signal-processing algorithms are not parallelizable and therefore cannot take advantage of these new ideas.”
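A computation parallelizes well when it splits into independent pieces whose results combine cheaply. The article does not describe the researchers' algorithms in detail, so the following is only a toy sketch (function names like `parallel_sum_of_squares` are illustrative); it uses threads for simplicity, where CPU-bound work would normally use separate processes:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles its own slice independently of the others.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the input into independent chunks, one per worker...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # ...then combine the partial results with a cheap final reduction.
        return sum(pool.map(partial_sum, chunks))
```

The point of the sketch is the shape of the computation: because no chunk depends on another, adding workers adds speed. An algorithm whose steps each depend on the previous step's result has no such decomposition, which is the problem Yan describes.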

Parallel processing, say Yan and Wagh, allows separate applications of a multimedia system to be dedicated to specific processors. Video games might use one processor for graphics, a second for manipulating objects on the screen, and a third for the remaining system computations. This helps prevent overloading a single processor with competing demands.
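The division of labor described above can be sketched with dedicated worker threads, each owning a queue for one kind of task. This is a toy illustration using Python's standard library; the subsystem names are invented, not from the article:

```python
import queue
import threading

def make_worker(name, inbox, log):
    """A worker dedicated to one subsystem: it only ever drains its own queue."""
    def run():
        while True:
            job = inbox.get()
            if job is None:          # sentinel value: shut the worker down
                break
            log.append((name, job))  # stand-in for real processing
    return threading.Thread(target=run)

# One queue per subsystem, so one stream of work never starves another.
graphics_q, physics_q = queue.Queue(), queue.Queue()
log = []
workers = [make_worker("graphics", graphics_q, log),
           make_worker("physics", physics_q, log)]
for w in workers:
    w.start()

graphics_q.put("draw frame 1")
physics_q.put("move object A")
graphics_q.put(None)
physics_q.put(None)
for w in workers:
    w.join()
```

Each queue acts like a dedicated processor: graphics jobs and physics jobs never compete for the same worker, which is the overload-prevention the researchers describe.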

“Our algorithms are inherently structured,” says Yan. “This enables us to extract the maximum parallelism in processing and to offload tasks to dedicated hardware.”
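The article does not spell out the algorithms themselves, but "bilinear algorithm" is a standard term for structured computations such as fast polynomial and matrix products. Karatsuba multiplication is a textbook instance of the idea: it replaces four sub-products with three, and those three products are mutually independent, so they could in principle be dispatched to separate processors. A sketch (not the researchers' method):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using three recursive products
    instead of four -- a classic bilinear algorithm. The three products
    are independent of one another, so they parallelize naturally."""
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    a = karatsuba(hi_x, hi_y)                 # independent product 1
    b = karatsuba(lo_x, lo_y)                 # independent product 2
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)   # independent product 3
    # Recombine: x*y = a*2^(2n) + (c - a - b)*2^n + b
    return (a << (2 * n)) + ((c - a - b) << n) + b
```

The structure is what matters: because the three recursive products share no intermediate results, a compiler or hardware designer can map them onto separate functional units, which is the kind of "inherent structure" Yan refers to.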

Another advantage of the Lehigh researchers’ algorithms is that they can be scaled to handle the greater level of complexity required by computationally intensive jobs.

Scaling gracefully to handle size and complexity

“Our algorithms scale gracefully and deal easily with size and complexity,” says Yan. “The earlier algorithms worked fine for small problems, but problems have become more complex. Without algorithms like ours, this complexity would overwhelm processors.

“Our goal is to make it possible for information technologies to continue to improve at the rate that consumers are accustomed to.”

Wagh and Yan have published articles in the top journals of their field, including IEEE Transactions on Signal Processing, IEEE Signal Processing Letters, and an Elsevier journal. One of Yan’s students and two of Wagh’s have earned Ph.D.s in this area.
