Researchers solve scaling challenge for multi-core chips

April 16, 2012, Semiconductor Research Corporation

Researchers sponsored by Semiconductor Research Corporation (SRC), the world's leading university-research consortium for semiconductors and related technologies, today announced that they have identified a path to overcome challenges in scaling multi-core semiconductors by successfully addressing how to scale memory communications among the cores. The results can enable the continued integration of ever-smaller, more capable integrated circuits (ICs) into computer hardware without the expense of rewriting all software from scratch to accommodate the increased capabilities.

Today’s announcement involves researchers Professor Daniel Sorin from Duke University, Professor Milo M.K. Martin from the University of Pennsylvania and Professor Mark D. Hill from the University of Wisconsin. The SRC-guided research significantly extends the path for cores to communicate by reading and writing to a shared space – known as cache-coherent shared memory. In each core, one or more caches hold the subset of memory locations that the core has most recently read and written.

Cache coherence protocols are built into hardware to guarantee that every cache and memory controller sees a consistent view of shared data while maintaining high performance. As computational demands on the cores increase, so do concerns that these protocols will become slow or energy-inefficient as the number of cores grows.
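The core guarantee of such a protocol can be illustrated with a minimal sketch, not taken from the announced design: a directory records which cores hold a copy of each memory block, so a write can invalidate exactly the stale copies and nothing more. The `Directory` class and its methods are illustrative names, not part of the researchers' work.

```python
# Illustrative sketch of invalidation-based cache coherence:
# a directory tracks, per memory block, which cores hold a cached copy.

class Directory:
    def __init__(self):
        self.sharers = {}  # block address -> set of core ids holding a copy

    def read(self, core, addr):
        # A read adds the core to the block's sharer set.
        self.sharers.setdefault(addr, set()).add(core)

    def write(self, core, addr):
        # A write invalidates every other cached copy, then records
        # the writer as the sole holder of the block.
        invalidated = self.sharers.get(addr, set()) - {core}
        self.sharers[addr] = {core}
        return invalidated  # copies that must be discarded

d = Directory()
d.read(0, 0x100)
d.read(1, 0x100)
print(d.write(2, 0x100))  # -> {0, 1}: both old copies are invalidated
```

The scalability worry the article describes stems from exactly this bookkeeping: the sharer sets cost storage, and the invalidation messages cost interconnect traffic.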

“We have refuted calls for a radical design change by showing that, using already existing techniques, we can create cache coherence protocols that scale to hundreds and perhaps even thousands of cores,” said Sorin.

“Our results allow us to confidently predict that, with these new protocols, on-chip coherence is here to stay. Computer systems don’t need to abandon current compatibilities to accommodate even hundreds of cores,” Sorin added. “Chip area and energy consumption may limit future multi-core chips, but our research refutes conventional wisdom that multi-core scalability of the memory system would be the primary scaling bottleneck.”

The alleged lack of scalability of coherence is attributed to the poor scaling of the storage it requires and of the traffic it generates on the interconnection network, as well as concerns about its latency and energy costs. A popular view in industry has held that future multi-core chips will no longer be able to rely on hardware coherence, but will instead communicate through software-managed coherence or message passing that does not share memory. For the past few years, the high costs estimated for supporting those alternatives have raised growing concern among manufacturers.

The solution described by the research combines several known techniques: shared caches augmented to track cached copies, explicit cache eviction notifications, and hierarchical design. A scalability analysis of this combined design confirms that shared memory among multiple cores, and its vital benefits for future computational growth, can allow a broad range of technologies and industries to maintain their reliance on powerful, cost-effective roadmaps.
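One of the named techniques, explicit cache eviction notifications, can be sketched as follows. This is a hedged illustration, not the researchers' implementation; the class and method names are invented. Without notifications, a directory's sharer set grows stale after silent evictions, so writes send needless invalidations; an explicit notification keeps the set exact and trims that traffic.

```python
# Illustrative sketch of explicit eviction notifications keeping a
# directory's sharer sets exact, so writes invalidate fewer copies.

class TrackingSharedCache:
    def __init__(self):
        self.sharers = {}  # block address -> exact set of caching cores

    def record_read(self, core, addr):
        self.sharers.setdefault(addr, set()).add(core)

    def notify_eviction(self, core, addr):
        # Explicit eviction notification: remove the core immediately,
        # so it never receives an invalidation for a block it evicted.
        self.sharers.get(addr, set()).discard(core)

    def invalidations_for_write(self, writer, addr):
        # Only the cores still actually holding the block are messaged.
        return self.sharers.get(addr, set()) - {writer}

c = TrackingSharedCache()
c.record_read(0, 0x40)
c.record_read(1, 0x40)
c.notify_eviction(1, 0x40)                 # core 1 evicts and says so
print(c.invalidations_for_write(2, 0x40))  # -> {0}; core 1 is skipped
```

Hierarchical design, the third named technique, applies the same idea recursively: a directory tracks clusters of cores rather than individual cores, and each cluster tracks its own members, bounding storage per level.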

“Chipmakers are not operating in a vacuum and must continue to identify how they’ll enable their partners on the hardware side,” said SRC Executive Vice President Steven Hillenius. “As we collectively grapple with how to keep costs low and performance high for the next generation of computational technologies, our announcement today is that one of the key problems for scaling can be solved.”

This news means that not only will the computer industry be able to avoid radically changing the programming paradigm from the mainstream technique of cache-coherent shared memory, but the solution developed by Sorin and his colleagues also preserves backward compatibility with the vast amount of legacy code written for cache-coherent shared memory. Thus, as the industry plans for the future, it gains a path to scalability without requiring all new software.


