Intel unveils Knights Corner - 1 teraflop chip

Nov 17, 2011 by Bob Yirka
Rajeeb Hazra, general manager of Intel's Technical Computing Group, holding "Knights Corner", an Intel Many Integrated Core architecture co-processor capable of delivering more than 1 TFLOPS of double precision performance.

(PhysOrg.com) -- Rajeeb Hazra, Intel’s general manager of technical computing, surprised a group attending this year’s SC11 conference at a steak house in Seattle this past week by holding up a single chip and declaring "It's not a PowerPoint, it's a real chip." He was referring to the processing chip Intel has created that is capable of performing at 1 teraflops, called Knights Corner, which, unlike its rivals, is based on the x86 architecture that still sits at the base of most desktop machines in use today.

The SC conference is a meeting for those in the high-performance computing arena, so it was no coincidence that Intel came fully prepared to unveil the chip, of which it is clearly proud.

The chip attains its high processing speeds by packing many cores onto a single die, an approach Intel calls its Many Integrated Core (MIC) architecture; in this case, more than 50 cores, which puts to shame the quad-core technology being advertised for consumer computers. The new chip will first be installed in a machine at the Texas Advanced Computing Center, which expects the system to run at 10 petaflops.
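
To make the many-core idea concrete, here is a minimal sketch of the kind of ordinary threaded x86 code that scales with core count. It is purely illustrative and does not target any specific Intel product or toolchain; it is standard C with OpenMP, and the same source would simply be recompiled whether the hardware offers four cores or more than fifty.

    #include <stdio.h>
    #include <omp.h>

    #define N 10000000

    static double a[N], b[N];

    int main(void) {
        double sum = 0.0;

        /* Fill the input array. */
        for (int i = 0; i < N; i++)
            a[i] = (double)i;

        /* The same loop spreads across however many cores the runtime
           sees, whether that is 4 on a desktop or 50+ on a MIC chip. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            b[i] = a[i] * a[i];
            sum += b[i];
        }

        printf("max threads available: %d, checksum: %g\n",
               omp_get_max_threads(), sum);
        return 0;
    }

This is also the crux of Intel's x86 pitch further down: code like this would not have to be rewritten for a different instruction set, only recompiled.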

The announcement of the chip has industry insiders marveling once again at the progress being made in systems architecture. It was just fourteen years ago that Intel showed off its first computer capable of running at 1 teraflop, a machine that required almost 10,000 Pentium chips and took up 72 cabinets. Putting all that power into a single chip reduces power consumption dramatically.

The new chip isn’t meant to be used as a CPU, though; instead, it will serve as a coprocessor, taking on specific, computation-heavy routines to help bump up the overall speed of a computer, much the same way graphics processors are used in desktop PCs.
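
For readers wondering what "serving as a coprocessor" looks like in practice, Intel's compilers eventually exposed its MIC chips through offload pragmas that send a marked region of code, along with the data it needs, to the accelerator and bring results back. The sketch below is only a rough illustration of that style and is not taken from this announcement; the pragma form follows Intel's later Xeon Phi documentation, and the variable names and sizes are invented for the example. On a compiler or machine without the extension, the unknown pragma is ignored and the loop simply runs on the host.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int n = 1000000;
        float *src = malloc(n * sizeof(float));
        float *dst = malloc(n * sizeof(float));

        for (int i = 0; i < n; i++)
            src[i] = (float)i;

        /* Ship the compute-heavy loop to the coprocessor: copy src over,
           run the loop there in parallel, and copy dst back. */
        #pragma offload target(mic) in(src:length(n)) out(dst:length(n))
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * src[i];

        printf("dst[42] = %f\n", dst[42]);
        free(src);
        free(dst);
        return 0;
    }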

And speaking of graphics processors, the announcement of Knights Corner means Intel is taking direct aim at Nvidia and AMD, two companies that make graphics processors but have also branched out into offering them as coprocessors for superfast computers. Thus, the stakes have just been raised.

Intel says its product is a better fit for most current systems because it is based on the x86 architecture: unlike its competitors' offerings, it won't require adopters to port their applications to a new technology.

Intel also took advantage of the spotlight garnered by the Knights Corner announcement to declare that the company has set a goal of attaining exascale speeds by 2018, which would mean a roughly 100-fold increase over current technology (an exaflops is 1,000 petaflops, against the roughly 10 petaflops of today's fastest systems). Computers running at such speeds would open doors to new results-oriented computing such as better weather prediction, or figuring out what really happens when cars crash, and would of course be wanted by the military to calculate super secret stuff.

User comments: 24

StevenLjr
5 / 5 (8) Nov 17, 2011
In my computer, now!
bugmenot23
not rated yet Nov 17, 2011
You hear him! Now, now now!
gwrede
4.8 / 5 (6) Nov 17, 2011
or figuring out what really happens when cars crash, and would of course be wanted by the military to calculate super secret stuff.
LOL Most probably. The day of Kurzweilian Singularity is nigh!
SteveL
5 / 5 (1) Nov 17, 2011
In my computer, now!

That's my boy! (I want one in my computer too!)
Norezar
1.5 / 5 (4) Nov 17, 2011
It'll be a decade or two before anything like this hits home consumer desktops.

If they were to release early they'd not be able to milk the market for every frequency/core count variation imaginable with "new" product lines.
El_Nose
5 / 5 (2) Nov 17, 2011
there is almost no software in use today in the normal consumer market that would benefit from that chip -- but it would be a cool toy
antialias_physorg
4.8 / 5 (4) Nov 17, 2011
In my computer, now!

And what would it do in your computer? Your processor is sitting idle 99% of the time as it is. You have no software that could use it (i.e. that is geared towards massively parallel computing).

Your OS would just use a single core most of the time (and one core on this chip has LESS performance than what you get on one high end single-core chip for standard consumer computers)

So yeah - you'd be paying handsomely for an effective downgrade in experienced performance.
Royale
5 / 5 (2) Nov 17, 2011
lol antialias. It's funny when everyone gets a dose of reality thrown at them.
You're absolutely correct. These chips are designed for scientific studies where they have universities and professionals literally lined up for months or years ahead of time. With the co-processor always running at full tilt it makes sense...
that_guy
5 / 5 (2) Nov 17, 2011
Yep, this chip has little or no bearing on the personal computer market.

If it makes you feel any better, the fastest i7 chips run at about 120GFLOPS - 20GFLOPS per core. This is actually the same speed per core as knight's corner. Basically, you're getting the same thing without all the unnecessary cores.

Now your video card with Nvidia's Tesla M2090 actually runs at 665 GFlops, but it has 512 cores.

All that said, GFlops are a terrible comparison when comparing consumer to scientific computing. Your processor is optimized for what you will do with it software-wise, which doesn't need a lot of floating point calculations (compared to other operations). A lot of the GFLOPS power of the i7 actually goes unused if you have a discrete video card. How about that?

Graphics (And many scientific simulations), on the other hand, need to do a lot of FLOPS, which is why your graphics card smokes your processor on this measurement.
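
The per-core figures in comments like this come from the usual back-of-the-envelope formula: peak FLOPS is roughly cores × clock speed × floating-point operations per cycle per core. The snippet below just encodes that arithmetic; the sample inputs are illustrative guesses (a quad-core desktop chip versus a hypothetical 50-core part with wider vectors at a lower clock), not official specifications.

    #include <stdio.h>

    /* Rough peak throughput in GFLOPS:
       cores * clock in GHz * floating-point operations per cycle per core. */
    static double peak_gflops(int cores, double ghz, int flops_per_cycle) {
        return cores * ghz * flops_per_cycle;
    }

    int main(void) {
        /* Illustrative numbers only, not official specs. */
        printf("4-core desktop @ 3.4 GHz, 8 FLOPs/cycle:  %.0f GFLOPS\n",
               peak_gflops(4, 3.4, 8));
        printf("50-core chip   @ 1.2 GHz, 16 FLOPs/cycle: %.0f GFLOPS\n",
               peak_gflops(50, 1.2, 16));
        return 0;
    }

By this rough reckoning a 50-core chip at a modest clock lands right around the 1 TFLOPS figure in the article, while a consumer quad-core comes in around 100 GFLOPS, which is the ballpark being discussed above.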
Objectivist
4.3 / 5 (3) Nov 17, 2011
Intel unveils Knights Corner - 1 teraflop chip

... more than 1 TFLOPS of double...

... capable of performing at 1 teraflops...

... system to run at 10 petaflops.

... capable of running at 1 teraflop...

I was going to point out that "flop [sic!]" is not the singular form of FLOPS, which of course has no singular form due to it being an abbreviation. Instead, as I began copying the quotes above, I noticed the inconsistent usage (and misusage) of the term, which led me to conclude that Bob Yirka's job involves a lot of copying and pasting rather than researching.
that_guy
not rated yet Nov 17, 2011
If Intel released this chip for consumers, it would be their most expensive flop ever, in TeraFLOPS proportions. Oh hohoho, terrible puns.

@Objectivist: while you are technically correct (FLOPS = FLoating point OPerations per Second) - I'd like to point out that FLOPS is one of the most unwieldy acronyms to use, and your mind gets messed up with the counterintuitive usage.

And FLOP in itself is not necessarily wrong - you could say TeraFLOP per second. If you said FLOP, then it is more incomplete than wrong.
Objectivist
2.5 / 5 (2) Nov 17, 2011
I didn't claim FLOP was incorrect per se. I merely claimed that "flop [sic!]" (or FLOP) is not the singular form of FLOPS, as it is being used in the article.

Furthermore, incomplete in this case is also completely wrong. That "per second" is crucial for the unit. Removing "per second" creates a completely different, and in this case uninteresting, unit. Remove "per second" from watt and you end up with joule. According to your logic, saying joule instead of watt is "more incomplete than wrong." I doubt your high school physics teacher would agree with you on that.
emsquared
5 / 5 (1) Nov 17, 2011
And what would it do in your computer? Your processor is sitting idle 99% of the time as it is. You have no software that could use it (i.e. that is geared towards massively parallel computing).

Your OS would just use a single core most of the time (and one core on this chip has LESS performance than what you get on one high end single-core chip for standard consumer computers)

So yeah - you'd be paying handsomely for an effective downgrade in experienced performance.

Guess I'll just have to keep using my liquid nitrogen for flash-freezing / shattering bananas, bagels, etc.
Ricochet
5 / 5 (3) Nov 18, 2011
.
I got mah
Flips flopped
And mah
FLOPS flipped
Put all the grains
In a little chip

Put it in mah box
And flipped the switch
Made the electron soup
And took a sip

Now I'm super-turbo
With a hyper servo
Crunchin numbers at a factor
Of warp 10, bro!

I'd write more but busy as hell here @ work...
SteveL
5 / 5 (3) Nov 18, 2011
In my computer, now!

And what would it do in your computer? Your processor is sitting idle 99% of the time as it is. You have no software that could use it (i.e. that is geared towards massively parallel computing).

Your OS would just use a single core most of the time (and one core on this chip has LESS performance than what you get on one high end single-core chip for standard consumer computers)

So yeah - you'd be paying handsomely for an effective downgrade in experienced performance.

For most people you would be correct, but not when it comes to either of us. (StevenLjr and myself) We both participate in Distributed Computing where our computers are used to help with research problems. As an example: einstein@home has no difficulty utilizing 100% of all 4 CPU cores in this system and 100% of my GTX 580 GPU. We have to go the extra mile when it comes to cooling our systems, but it's how we contribute.
wwqq
5 / 5 (1) Nov 19, 2011
It'll be a decade or two before anything like this hits home consumer desktops.


I have a 2 TFLOPS (single precision) chip in my computer at this very moment. It's called a GPU; and it's not even a particularly expensive or powerful one (it's an HD 6870).
TabulaMentis
not rated yet Nov 19, 2011
How does this new chip change the timeline, if any, mentioned in the following Wikipedia.org statement: "Desktop computers will have the same processing power as human brains by the year 2029."

Link: http://en.wikiped...lligence

Links for Speed of Thought:

http://www.answer...ew/23027

http://www.scienc...ys.shtml
Ober
not rated yet Nov 19, 2011
I'd just like to second the Distributed Computing post by SteveL.

CERN relies on distributed computing, hence their massive network they built for the LHC.

As our theoretical understanding of the universe grows, so does our need for more computing power.
jimbo92107
not rated yet Nov 20, 2011
With a chip like this, I could open Word, read an email and print a document, all at the same time! My life will be easy!
Graeme
5 / 5 (1) Nov 20, 2011
Home applications can include more realistic scene rendering, better quality video compression, handwriting recognition, speech recognition, photo deblurring, as well as einstein@home.
rwinners
not rated yet Nov 20, 2011
" and would of course be wanted by the military to calculate super secret stuff."

Like keeping track of everyone everywhere 24/7.
Callippo
not rated yet Nov 21, 2011
Each core supports four threads, so with the hyperthreading option enabled we could actually see two hundred logical processors in Task Manager.

http://img696.ima...core.jpg
antialias_physorg
not rated yet Nov 21, 2011
Home applications can include more realistic scene rendering,

High end graphics cards already work massively in parallel (this is why some parallel super-computers are nothing more than a lot of PS3s in a room). This chip wouldn't speed up scene rendering. For this kind of work, the CPU is way slower than the GPU.

For comparison: We're currently working on software that, among other things, needs to filter large images. Some of these filters benefit from parallelization.

We did a test run on a quad core CPU and a high end graphics card for one of these filters.

CPU: 650 seconds
GPU: 24 seconds
SteveL
not rated yet Nov 21, 2011
So, if a 50-core CPU could run 50 graphics cards (with 1 or more GPUs on each, and hundreds of CUDA cores in each GPU...), well, it could be an awesome mini computer system for distributed computing tasks. Or just the CPU could be used simply for small-business server-based processing.