Professor works to overcome challenges in harnessing power of multicore computer processors

Jan 15, 2013 by Karen B. Roberts
UD's John Cavazos is working to overcome challenges in harnessing the power of multicore computer processors. Credit: Lane McLaughlin

(Phys.org)—Computer processors that can complete multiple tasks simultaneously have been available in the mainstream for almost a decade. In fact, almost all processors developed today are multicore processors. Yet, computer programmers still struggle to efficiently harness their power because it is difficult to write correct and efficient parallel code.

According to the University of Delaware's John Cavazos, to effectively exploit the power of multi-core processors, programs must be structured as a collection of independent tasks where separate tasks are executed on independent cores.

The complexity of modern software, however, makes this difficult.
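As a rough picture of what that structure looks like in the simple case, the sketch below (a hypothetical C++ example, not code from the project) splits a workload into independent tasks and hands each to its own thread, leaving the operating system free to schedule them on separate cores.

```cpp
// Hypothetical illustration: a workload split into independent tasks,
// each dispatched to its own thread so the OS can schedule them on
// different cores. Not code from the project described in the article.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// One independent task: sum a disjoint slice of the data.
double sum_slice(const std::vector<double>& data, size_t begin, size_t end) {
    return std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
}

int main() {
    std::vector<double> data(1'000'000, 1.0);
    const size_t num_tasks = 4;                     // e.g., one task per core
    const size_t chunk = data.size() / num_tasks;

    std::vector<std::future<double>> tasks;
    for (size_t t = 0; t < num_tasks; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == num_tasks) ? data.size() : begin + chunk;
        // Each slice is independent of the others, so the tasks can run
        // concurrently without synchronizing on shared state.
        tasks.push_back(std::async(std::launch::async, sum_slice,
                                   std::cref(data), begin, end));
    }

    double total = 0.0;
    for (auto& f : tasks) total += f.get();
    std::cout << "total = " << total << '\n';
    return 0;
}
```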

Now Cavazos, assistant professor of computer and information sciences, is attempting to invent new algorithms and tools for the parallelization of large-scale programs as principal investigator (PI) of a new National Science Foundation (NSF) grant.

The work, funded through a $497,791 grant from NSF's Division of Computing and Communication Foundations, is a collaboration with Michael Spear, assistant professor of computer science and engineering at Lehigh University. It involves using a novel combination of automatic and profile-driven techniques to address fundamental issues in creating parallel programs.

Compilers translate applications written by programmers into machine code that executes on a computer. One of their important tasks is to help applications run more efficiently through parallelization. However, while a variety of parallelization techniques are available, such as automatic parallelization and speculative parallelization, no single technique is applicable to all programs.
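The contrast is easy to see in a small, hypothetical example (the loops below are illustrative, not drawn from the project): the first loop has fully independent iterations and is a natural target for automatic parallelization, while the second carries a dependence from one iteration to the next that a compiler cannot parallelize without further analysis or speculation.

```cpp
// Hypothetical illustration of why no single parallelization technique fits
// every program. The first loop has independent iterations and is a natural
// target for automatic parallelization (here made explicit with OpenMP);
// the second carries a loop-to-loop dependence that a compiler cannot
// safely parallelize as written.
#include <vector>

void scale(std::vector<double>& a, double k) {
    // Independent iterations: safe to run in parallel.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(a.size()); ++i) {
        a[i] *= k;
    }
}

void prefix_sum(std::vector<double>& a) {
    // Loop-carried dependence: iteration i reads the result of iteration i-1,
    // so this loop is not parallelizable without transforming the algorithm.
    for (size_t i = 1; i < a.size(); ++i) {
        a[i] += a[i - 1];
    }
}
```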

Cavazos will use machine learning to create a system that enables a compiler to analyze a program, assess which parallelization technique is suitable, and then automatically apply that technique to the program. He is also investigating how to enable the program, through learning, to adapt its behavior based on outside inputs and environments.
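In very rough terms, such a system might summarize a piece of code as a set of features and ask a trained model which technique to apply. The sketch below is purely illustrative: the feature names, thresholds, and hand-written rules are assumptions standing in for a learned model, not a description of Cavazos's actual system.

```cpp
// Purely illustrative sketch of compiler-side technique selection driven by
// program features. The features, thresholds, and names are hypothetical;
// in a real system the decision would come from a trained machine-learning
// model rather than hand-written rules.
enum class Technique {
    AutomaticParallelization,
    SpeculativeParallelization,
    LeaveSequential
};

struct LoopFeatures {
    bool   provably_independent;   // static analysis proved no cross-iteration dependence
    double conflict_probability;   // profile-estimated chance of a runtime dependence
    long   trip_count;             // expected number of iterations
};

Technique choose_technique(const LoopFeatures& f) {
    if (f.provably_independent && f.trip_count > 1000)
        return Technique::AutomaticParallelization;
    if (!f.provably_independent && f.conflict_probability < 0.05 && f.trip_count > 1000)
        return Technique::SpeculativeParallelization;   // worth the gamble
    return Technique::LeaveSequential;                   // parallel overhead not worth it
}
```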

The project extends research he began in 2010, when he received an NSF Faculty Early Career Development Award to develop adaptive compilers for multi-core computer environments. It also complements his recent work with the Defense Advanced Research Projects Agency to construct an extreme-scale software framework capable of automatically partitioning and mapping application code to a multi-core system and generating "SMART" applications that can reconfigure underlying hardware to save power.

"This research will enable a greater percentage of programs to benefit from by providing feedback to programmers so that they can improve the code, and by integrating adaptability so that a broader range of programs can achieve increased speed," Cavazos said.

As the project progresses, Cavazos plans to develop new algorithms and tools for speculative parallelization, a technique that allows shared-memory systems to execute in parallel loops that a compiler cannot otherwise prove to be parallelizable. Ultimately, he hopes to distribute the new prototypes and source code as open-source software.
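One way to picture speculative parallelization, again as a hypothetical sketch rather than the project's mechanism: iterations run in parallel on the optimistic assumption that they do not interfere, the runtime watches for a cross-iteration conflict, and the results are committed only if the speculation holds; otherwise the work is redone sequentially.

```cpp
// Hypothetical sketch of speculative loop parallelization: iterations run in
// parallel on a private copy of the array, optimistically assuming that the
// indirect writes a[idx[i]] never collide across iterations. If a conflict is
// detected, the speculative result is discarded and the loop re-executes
// sequentially on the original data.
#include <atomic>
#include <thread>
#include <utility>
#include <vector>

void speculative_update(std::vector<double>& a, const std::vector<int>& idx) {
    std::vector<double> copy = a;                     // speculation works on a copy
    std::vector<std::atomic<int>> owner(a.size());    // which thread touched each element
    for (auto& o : owner) o.store(-1);
    std::atomic<bool> conflict{false};

    auto worker = [&](int tid, size_t begin, size_t end) {
        for (size_t i = begin; i < end && !conflict; ++i) {
            int target = idx[i];
            int expected = -1;
            // Claim the element; two threads writing the same element is a
            // cross-iteration dependence, so the speculation must abort.
            if (!owner[target].compare_exchange_strong(expected, tid) &&
                expected != tid) {
                conflict = true;
                return;
            }
            copy[target] += 1.0;                      // the speculative write
        }
    };

    std::thread t1(worker, 0, 0, idx.size() / 2);
    std::thread t2(worker, 1, idx.size() / 2, idx.size());
    t1.join(); t2.join();

    if (!conflict) {
        a = std::move(copy);                          // commit speculative state
    } else {
        for (size_t i = 0; i < idx.size(); ++i)       // sequential fallback
            a[idx[i]] += 1.0;
    }
}
```

Real systems track read and write sets far more precisely and can often re-execute only the offending iterations, but the commit-or-rollback structure above is the essence of the technique.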

A key educational initiative includes training graduate students and integrating performance-related topics into his classroom instruction.

"Helping students to learn the fundamentals of creating that is both correct and efficient needs to be a major educational goal in our science departments," said Cavazos.
