Meeting consumers' HD demands with a faster algorithm

Jul 07, 2010 by Kurt Pfitzer

(PhysOrg.com) -- Engineers help smaller processors outperform a single superfast processor by working more efficiently in parallel.

In their insatiable demand for faster, smaller and more powerful gadgets, says Zhiyuan Yan, consumers are straining the capacity of information technology.

Devices that offer high-definition (HD) images and portability—such as cell phones with cameras and Internet access—require high throughput, or processing speed, says Yan, an assistant professor of electrical and computer engineering. To be portable and small, the devices must also be able to operate with little power.

These two trends—greater performance and stringent power requirements—pose challenges to the technology and the mathematical equations on which the IT revolution has relied.

“HD applications require high throughput and low power in order to be handheld and mobile,” says Yan. “When you take these two together, you’re in a sense burning the candle at both ends.”

Yan and Meghanad Wagh, an associate professor of electrical and computer engineering, recently received a three-year grant from the National Science Foundation to devise scalable bilinear algorithms to help meet these challenges.

“This project represents a different way of thinking about algorithms,” says Wagh. “Normally in signal processing, you write algorithms meant for standard computer architectures. But to achieve the required speed, we have developed an entirely new class of algorithms that can be directly cast into hardware. Our new algorithms are extremely fast and take advantage of the new trend in technology.”
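The article doesn't show what a bilinear algorithm looks like, but a small classic instance of the idea is Karatsuba's trick for multiplying two linear polynomials: three multiplications replace the naive four, and because those multiplications are mutually independent, they map naturally onto parallel hardware. Here is a Python sketch for illustration only; it is not the Lehigh researchers' algorithm.

```python
def karatsuba_linear(a0, a1, b0, b1):
    """Coefficients of (a0 + a1*x) * (b0 + b1*x) using 3 multiplications."""
    # The three products are independent of one another, so in a
    # hardware realization all three can be computed at the same time.
    m0 = a0 * b0
    m1 = a1 * b1
    m2 = (a0 + a1) * (b0 + b1)
    # Only additions and subtractions remain after the multiply stage.
    return (m0, m2 - m0 - m1, m1)  # coefficients of 1, x, x^2

print(karatsuba_linear(1, 2, 3, 4))  # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
```

Bilinear algorithms of this kind trade cheap additions for expensive multiplications, and the independent multiply stage is what makes them attractive for direct hardware implementation.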

Helping smaller processors work faster in parallel

The new algorithms are more suitable than traditional algorithms for high-performance computing.

“The IT revolution of the last 20 years has been driven in part by the scaling of CMOS [complementary metal-oxide semiconductor] technology,” says Yan. “We can pack more information in smaller chips, but the more we do this, the closer we approach the limits of technology.

“Until a few years ago, to improve computer speed, you used a larger processor. Now, you split the computation and do it in parallel with many smaller processors. Many less-powerful processors can be just as good as or better than one superfast processor, while consuming less power.

“The trouble is that a lot of traditional signal-processing algorithms are not parallelizable and therefore cannot take advantage of these new ideas.”
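The distinction can be made concrete with two textbook signal-processing filters — my own choice of examples, not code from the project. Every output of an FIR filter depends only on the inputs, so the outputs can be computed concurrently; an IIR recurrence feeds each output into the next and forces serial evaluation.

```python
from concurrent.futures import ThreadPoolExecutor

def fir_output(x, h, n):
    # Depends only on the inputs x and taps h: safe to compute in parallel.
    return sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))

def fir_parallel(x, h):
    # Every output sample is an independent task.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda n: fir_output(x, h, n), range(len(x))))

def iir_serial(x, a):
    # y[n] = a*y[n-1] + x[n]: each step needs the previous result,
    # so the loop cannot simply be split across processors.
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y
```

The FIR computation parallelizes trivially; the IIR recurrence is the kind of dependency chain that, per Yan, keeps many traditional algorithms from exploiting parallel hardware.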

Parallel processing, say Yan and Wagh, allows separate applications of a multimedia system to be dedicated to specific processors. Video games might use one processor for graphics, a second for manipulating objects on the screen, and a third for the remaining system computations. This helps prevent overloading a single processor with competing demands.
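The dedicated-processor idea can be sketched with Python threads (a hypothetical illustration; the subsystem names are mine, not the researchers'): each subsystem drains its own work queue, so one subsystem's load never competes with another's.

```python
import queue
import threading

def run_subsystem(name, tasks, results):
    # Dedicated worker: handles only this subsystem's jobs.
    while True:
        job = tasks.get()
        if job is None:  # shutdown signal
            return
        results.append((name, job()))

graphics_q, physics_q = queue.Queue(), queue.Queue()
results = []
workers = [
    threading.Thread(target=run_subsystem, args=("graphics", graphics_q, results)),
    threading.Thread(target=run_subsystem, args=("physics", physics_q, results)),
]
for w in workers:
    w.start()

graphics_q.put(lambda: "frame rendered")
physics_q.put(lambda: "collisions resolved")
graphics_q.put(None)  # tell each worker to shut down
physics_q.put(None)
for w in workers:
    w.join()
```

On a multicore device the same pattern applies with one physical core per subsystem, which is what lets a game render graphics without stalling the physics update.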

“Our algorithms are inherently structured,” says Yan. “This enables us to extract the maximum parallelism in processing and to offload tasks to dedicated hardware.”

Another advantage of the Lehigh researchers’ algorithms is that they can be scaled to handle the greater level of complexity required by computationally intensive jobs.

Scaling gracefully to handle size and complexity

“Our algorithms scale gracefully and deal easily with size and complexity,” says Yan. “The earlier algorithms worked fine for small problems, but problems have become more complex. Without algorithms like ours, this complexity would overwhelm processors.

“Our goal is to make it possible for information technologies to continue to improve at the rate that consumers are accustomed to.”

Wagh and Yan have published articles in the top journals of their field, including IEEE Transactions on Signal Processing, IEEE Signal Processing Letters, and an Elsevier journal. One of Yan’s students and two of Wagh’s have earned Ph.D.s in this area.


User comments

CSharpner
Jul 07, 2010
Uh, besides being multi-threaded (which is what most of us "modern" software developers write these days), what's so special about their "algorithms"? This story has no information in it whatsoever.
PinkElephant
Jul 08, 2010
Apparently, their algorithms are "bilinear" (*snort*). I guess that means they found a way to massively parallelize bilinear filtering -- an already embarrassingly parallel problem...
Adriab
Jul 08, 2010
This is only news because it affects a rapidly growing consumer market. This stuff happens all the time.

Something interesting to note is a shift back to performance tuning for speed and size. CPU cycles and memory got cheap so we got sloppy, but now these smaller devices seem to be pushing us back towards making good fast code.

Or at least that's my take on this.
