Scientists squeeze more than 1,000 cores on to computer chip

Jan 04, 2011

(PhysOrg.com) -- Scientists at the University of Glasgow have created an ultra-fast 1,000-core computer processor.

The core is the part of a computer’s central processing unit (CPU) which reads and executes instructions. Originally, computers were developed with only one core but, today, processors with two, four or even sixteen cores are commonplace.

However, Dr Wim Vanderbauwhede and colleagues at the University of Glasgow have created a processor which effectively contains more than a thousand cores on a single chip.

To do this, the scientists used a chip called a Field Programmable Gate Array (FPGA), which, like all microchips, contains millions of transistors – the tiny on-off switches that are the foundation of any electronic circuit.

FPGAs can be configured into specific circuits by the user, rather than their function being set at a factory, which enabled Dr Vanderbauwhede to divide up the transistors within the chip into small groups and ask each to perform a different task.

By creating more than 1,000 mini-circuits within the FPGA chip, the researchers effectively turned the chip into a 1,000-core processor – each core working on its own instructions.
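The idea of many small, independent cores – each running its own instruction stream – can be illustrated with a toy sketch in Python (this is purely an illustration; the researchers' actual design is a hardware circuit configured inside an FPGA, and the `MiniCore` class here is hypothetical):

```python
# Toy model of the 1,000-core idea: each "core" owns its own tiny
# program and its own register, and runs independently of the others.

class MiniCore:
    """A minimal core: executes its own instruction list on its own accumulator."""
    def __init__(self, program):
        self.program = program  # list of (opcode, operand) pairs
        self.acc = 0            # this core's private accumulator register

    def run(self):
        for op, arg in self.program:
            if op == "add":
                self.acc += arg
            elif op == "mul":
                self.acc *= arg
        return self.acc

# 1,000 independent mini-cores, each loaded with a different program
cores = [MiniCore([("add", i), ("mul", 2)]) for i in range(1000)]
results = [core.run() for core in cores]
print(len(results), results[:3])  # → 1000 [0, 2, 4]
```

On an FPGA the equivalent of each `MiniCore` is a small dedicated circuit, so all 1,000 genuinely run at the same time rather than being stepped one after another as in this sketch.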

The researchers then used the chip to process an algorithm which is central to the MPEG movie format – used in YouTube videos – at a speed of five gigabytes per second: around 20 times faster than current top-end desktop computers.

Dr Vanderbauwhede said: “FPGAs are not used within standard computers because they are fairly difficult to program, but their processing power is huge while their energy consumption is very small because they are so much quicker – so they are also a greener option.”

While most computers sold today contain more than one processing core, which allows them to carry out different processes simultaneously, traditional multi-core processors must share access to one memory source, which slows the system down.

The scientists in this research were able to make the processor faster by giving each core a certain amount of dedicated memory.
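The contrast between one shared memory and per-core dedicated memory can be sketched in software terms (an illustrative analogy only, not the researchers' design: with a single shared location every worker must take turns through a lock, while private memory per worker removes the contention entirely):

```python
# Illustrative sketch: shared memory forces serialization through a lock;
# dedicated per-worker memory needs no coordination at all.
import threading

N_WORKERS, N_OPS = 8, 10_000

# Shared-memory style: one counter protected by one lock (the bottleneck).
shared_total = [0]
lock = threading.Lock()

def shared_worker():
    for _ in range(N_OPS):
        with lock:               # every worker queues up here
            shared_total[0] += 1

# Dedicated-memory style: each worker owns its own slot; no lock needed.
slots = [0] * N_WORKERS

def private_worker(i):
    for _ in range(N_OPS):
        slots[i] += 1            # touches only this worker's own memory

def run_all(threads):
    for t in threads: t.start()
    for t in threads: t.join()

run_all([threading.Thread(target=shared_worker) for _ in range(N_WORKERS)])
run_all([threading.Thread(target=private_worker, args=(i,)) for i in range(N_WORKERS)])

print(shared_total[0], sum(slots))  # → 80000 80000
```

Both versions do the same total work, but only the first forces every access through a single shared point – which is the bottleneck the per-core dedicated memory avoids.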

Dr Vanderbauwhede, who hopes to present his research at the International Symposium on Applied Reconfigurable Computing in March 2011, added: “This is very early proof-of-concept work where we’re trying to demonstrate a convenient way to program FPGAs so that their potential to provide very fast processing power could be used much more widely in future computing and electronics.

“While many existing technologies currently make use of FPGAs, including plasma and LCD televisions and computer network routers, their use in standard desktop computers is limited.

“However, we are already seeing some microchips which combine traditional CPUs with FPGA chips being announced by developers, including Intel and ARM.

“I believe these kinds of processors will only become more common and help to speed up computers even further over the next few years.”


Provided by University of Glasgow




User comments : 14


El_Nose
not rated yet Jan 04, 2011
At first I thought this was the Intel chip that had a crapload of cores - which would have been old news - but this is new. FPGAs are a unique solution but no one should get the idea that these can replace a CPU - but they signal the re-emergence of the co-processor, like in the early nineties
Mesafina
5 / 5 (2) Jan 04, 2011
FPGAs are quite interesting. Indeed they are difficult to program, but not so much more difficult than designing a complex circuit board. Parallel processing with recursive inputs and outputs makes it easy to create unintended logical side-effects.

Multithreaded programming often addresses this by using mutexes, but the mutex itself is only necessary if your code is not properly designed. This really just means that people will have to think about their software a bit more and do more planning, at least until an abstraction language like Java is created to handle and verify integrity for the programmer.

I can definitely see fpga programming being handled in a more automated and emergent manner, possibly even modeled after how neurons in the brain program themselves. Neurons and fpga's do have some striking similarities depending on how you are using the fpga.
Quantum_Conundrum
1.8 / 5 (4) Jan 04, 2011
I can definitely see fpga programming being handled in a more automated and emergent manner, possibly even modeled after how neurons in the brain program themselves. Neurons and fpga's do have some striking similarities depending on how you are using the fpga.

When I was discussing A.I. with FBM, I described a concept of having a multi-core network capable of re-configuring its own connectivity among the cores, facilitated by nano-scale robotic arms to re-work wiring, etc.

Of course, true to form, FBM insisted nothing like this would ever be possible.
bg1
not rated yet Jan 04, 2011
Lots of processors, each with its own memory - sounds like nerve tissue. Maybe this could be used to make an artificial brain.
El_Nose
5 / 5 (1) Jan 04, 2011
Anyone see that article on "sloppy mathematics"? It's a direct tie-in to this one
Eikka
5 / 5 (4) Jan 04, 2011
Unfortunately, an FPGA can't re-configure itself while it is running, so there's a bit of a misunderstanding here. What they have done here is this: they've created a circuit inside an FPGA to emulate a 1000-core processor, and then given it a program to run. Simple as that.

The reason why we generally don't use FPGAs for general-purpose computing is that you rarely need to re-configure a chip in a product. They're mainly used for research and development, and one-off special products like in industrial automation, where making just one regular ASIC chip would cost too much (because you need to make ten thousand of them to turn a profit).

In a product like a set-top-box, you can take the circuit that was put into the FPGA and remove all the scaffolding required for programmability and turn it into an ASIC. That makes it even faster and even more energy-efficient, and cheaper.

(ASIC= Application Specific Integrated Circuit)
Eikka
5 / 5 (3) Jan 04, 2011
And an FPGA isn't suitable for being re-programmed over and over again in a computer because it relies on flash memory to store the gate configurations.

So you have a limited number of times that you can re-draw the circuit that the chip is emulating. After that it bricks.
Parsec
5 / 5 (3) Jan 04, 2011
Eikka - not all FPGAs use flash memory to store their gate configurations. Flash is limited to about 25k read/write cycles, which isn't usually an important limitation.

FPGAs are cheaper than ASICs for small to medium quantities (up to runs of about 100k parts). This is because ASICs cost millions in up-front costs, while FPGAs do not. However, ASICs are much cheaper to produce once they are designed, have far more gates, and run up to 10x faster (or at 10x less power). So...

FPGAs are perfect for proof of concept. ASICs are better for mass-produced consumer products. This is an example where FPGAs absolutely have the advantage.
TehDog
4 / 5 (1) Jan 04, 2011
Bloody physorg spam filter won't allow links from physorg...
See "The surprising usefulness of sloppy arithmetic"
And the post that I made several hours ago (awaiting moderation for that link) was essentially Parsec's points, in somewhat less detail :)
Eikka
4.5 / 5 (2) Jan 04, 2011
Flash is not read limited. Only write limited.

The point being, that if you imagine a computer that reconfigures its circuitry to suit the task, you will quickly run out of erase cycles on a typical FPGA chip.

A modern multitasking processor changes "context", i.e. the program it operates on, several hundred or thousand times a second, and there's a hundred different programs running at any given time. Even if you didn't have the erase cycle limitations of flash memory, the FPGA circuitry can't keep up re-programming to optimize itself for different tasks, and that will negate the benefit of being able to change the circuitry.

It only works if you're running one program at a time, but in such cases you almost never have to change the program.

PinkElephant
not rated yet Jan 04, 2011
@Eikka,
Even if you didn't have the erase cycle limitations of flash memory, the FPGA circuitry can't keep up re-programming to optimize itself for different tasks, and that will negate the benefit of being able to change the circuitry.
Not necessarily. Think of an SoC that can rapidly reconfigure itself to either very efficiently playback a high-definition 3D movie with surround sound, or very efficiently ray-trace 3D graphics and simulate physics for an interactive game, or very efficiently run a huge spreadsheet, or very efficiently host a website with a backing database, or very efficiently process and route communications in a peer-to-peer ad-hoc network, or indeed very efficiently simulate the neural net of a virtual pet. I'm thinking of a very compact and power-efficient device (the size of a cell phone or smaller) that could be the only computer you'll ever need. Naturally, 10 to 20 years into the future...
RodRico
not rated yet Jan 05, 2011
Some facts: Most large FPGAs are RAM based, and they can be programmed from other processors. Furthermore, parts of their logic can be reprogrammed while other parts run. Logic in one area of the FPGA can even program logic in another area.

The issue with large FPGAs is cost. 75% of the die area in an FPGA is dedicated to programmed connections, so a hard-wired ASIC will always be smaller, faster, and consume less power. The ASIC will also be cheaper *IF* the production run is large enough to offset the custom design cost.
RodRico
not rated yet Jan 05, 2011
Folks that are interested, by the way, should Google "Reconfigurable Computing." Regular folks can even get into this game thanks to the recent emergence of Xilinx Spartan-6 devices ($150 for a pretty large/fast FPGA with 8 lanes at PCI Express speeds). The tools for these devices are free and can be downloaded off Xilinx's web site. Designs for many processors can be downloaded for free at OpenCores.org, or you can roll your own (as I did).
shadfurman
not rated yet Jan 07, 2011
The reason why we generally don't use FPGAs for general-purpose computing is that you rarely need to re-configure a chip in a product. They're mainly used for research and development, and one-off special products like in industrial automation, where making just one regular ASIC chip would cost too much (because you need to make ten thousand of them to turn a profit).


I don't know, I can imagine an FPGA being used in small devices to decode video, then maybe reconfigure to do some UI processing, then maybe reconfigure to do some BOINC computing while your device charges, then maybe reconfigure to do some 3D computations for a game.
