'Jaguar' supercomputer gaining speed

Titan will be at least twice as fast as today's fastest supercomputer, which is located in Japan
File photo shows a staff member of Japan's national Riken institute opening a rack of the "K Computer" supercomputer at Riken's laboratory in Japan on June 21, 2011. Cray Inc. said it has sealed a deal to overhaul the US Department of Energy's "Jaguar" supercomputer, making it faster than any other machine on the planet. The new supercomputer will be renamed "Titan".

Cray Inc. said it has sealed a deal to overhaul the US Department of Energy's "Jaguar" supercomputer, making it faster than any other machine on the planet.

The supercomputer at the DOE Oak Ridge National Laboratory will be renamed "Titan" after it is beefed up with speedy, powerful chips from California companies NVIDIA and AMD.

"All areas of science can benefit from this substantial increase in , opening the doors for new discoveries that so far have been out of reach," said associate lab director for computing Jeff Nichols.

"Titan will be used for a variety of important research projects, including the development of more commercially viable biofuels, cleaner burning engines, safer nuclear energy and more efficient solar power."

NVIDIA specializes in GPU (graphics processing unit) chips, used to enable seamless, rich graphics and smooth action in videogames by processing myriad tasks simultaneously through parallel computing.

Rival company AMD will provide powerful chips that process data in sequence as is standard in home or work computers.
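As a rough illustration of the difference, here is a minimal sketch in plain C. It is an illustrative stand-in rather than code from the Titan project: the second loop uses OpenMP to spread the same per-element work across several CPU cores, which is the same "every element is independent" idea that GPUs exploit at a much larger scale.

/* Sequential vs. parallel vector update -- a minimal, hypothetical sketch.
 * Not code from the Titan project. The parallel version uses OpenMP on a
 * multicore CPU; a GPU runs the same per-element work across thousands of
 * threads at once.
 * Build: gcc -O2 -fopenmp parallel_sketch.c -o parallel_sketch
 */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L

int main(void) {
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y) return 1;
    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* CPU-style: elements handled one after another, in sequence. */
    for (long i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    /* GPU-style in spirit: every element is independent, so the work
     * can be spread across many workers running simultaneously. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    free(x);
    free(y);
    return 0;
}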

"Oak Ridge's decision to base Titan on Tesla GPUs underscores the growing belief that GPU-based heterogeneous computing is the best approach to reach exascale computing levels within the next decade," said NVIDIA Steve Scott.

Cray valued the multi-year contract at more than $97 million and said that Titan will be at least twice as fast and three times as energy efficient as today's fastest supercomputer, which is located in Japan.



(c) 2011 AFP

Citation: 'Jaguar' supercomputer gaining speed (2011, October 12) retrieved 23 August 2019 from https://phys.org/news/2011-10-jaguar-supercomputer-gaining.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

User comments

Oct 12, 2011
Nvidia is awesome; I am still struck by the blow of AMD buying ATI. I can no longer keep my eggs in one basket... I have a high-end Nvidia card and it rocks the numbers. Doesn't Microsoft need to bask a bit in the light for its new version of DirectX? Them there shader cells work pretty good if you ask me, and the architecture required of DX? Perfect for parallel computing.

Oct 12, 2011
How fast??

Oct 12, 2011
Well, the K computer is rated 8.16 petaFLOPS, so if they hit their target of 2x, it would be 16.32 petaFLOPS.

Hope that helps.

Oct 12, 2011
Nvidia is awesome; I am still struck by the blow of AMD buying ATI. I can no longer keep my eggs in one basket... I have a high-end Nvidia card and it rocks the numbers. Doesn't Microsoft need to bask a bit in the light for its new version of DirectX? Them there shader cells work pretty good if you ask me, and the architecture required of DX? Perfect for parallel computing.

I agree. My Nvidia GTX 580 does quite well on einstein@home. What they didn't quite make clear in the article though: does Tesla use Nvidia GPUs, or do they make their own?

Oct 12, 2011
Oak Ridge using CUDA is asinine. AMD has put in the heavy lifting to optimize Bulldozer with GPGPUs for its 6000-series GPGPUs, and OpenCL does run circles around CUDA. More to the point, Cray is committed to LLVM/Clang, and Apple/AMD/ARM/Intel/IBM all use LLVM/Clang with OpenCL 1.1. When the LLVM 3.x stack is released in a few weeks, I'm sure Cray will be pushing for AMD Radeon HD 6990 Graphics or its FireStream Processor line.

AMD FireStream 9350 / 9370 GPU Compute Accelerators stomp Tesla all over the place.

Oct 12, 2011
I agree. My Nvidia GTX 580 does quite well on einstein@home. What they didn't quite make clear in the article though: does Tesla use Nvidia GPUs, or do they make their own?


Tesla is an Nvidia product line, not a separate company. The most recent Tesla iterations are based on the Fermi architecture.

Oct 12, 2011
@Horus,

The choice of AMD over Intel for CPUs is a dubious one (considering Intel is superior both on performance and TDP); Bulldozer has turned out to be quite a disappointment. For instance, see here:

http://www.anandt...0-tested

I don't see what the big deal is about OpenCL and LLVM: both should be able to work just as well with Intel and NVIDIA hardware. You make it sound like NVIDIA doesn't support OpenCL, which is just plain false. In fact, NVIDIA was one of OpenCL's original inventors and proponents!

http://developer....m/opencl

As for specific cards, the horse race never ends. NVIDIA will have Kepler arriving into retail channels sometime early in 2012, which is supposed to be a huge leap above the current Fermi architecture. In GPUs, like in CPUs, performance per watt is very important when talking about massive installations; it could be that NVIDIA is simply more efficient than AMD and was chosen because of that.

Oct 14, 2011
AMD probably offers the best integer performance for the buck.
Hardly:

http://www.anandt...tested/7

Ignore the 3DSMax and Cinebench, and focus on 7-zip, Par2, TrueCrypt, and Chromium Compile tests -- these are all pretty much purely integer workloads. Only in 7-zip is the latest, biggest, and fastest Bulldozer just barely edging out an ordinary, desktop-class Sandy Bridge chip -- a design that's nearly a year old now. And what about power consumption at full load?

http://www.anandt...tested/9

THAT'll cost a buck or two.
scheduled to see a 30 percent or more increase in performance over the next couple of years
As if Intel wasn't? Within the next 6 months, Intel's next-gen Ivy Bridge comes out on a next-gen 22-nm 3D-tri-gate process, with something like a further 50% TDP reduction per work unit. That's light-years ahead of AMD on process alone, never mind architecture...

Oct 15, 2011
Titan will be around 20 petaflops. But with chips from AMD built on a 45 nm process!? Hopefully they use 28 nm or less to reduce the energy needs!

Oct 15, 2011
@Vendicar_Decarian,
The only two benchmarks in your reference that are applicable to integer performance
You don't think compilation (of Chromium) is pretty much a pure-integer workload? You don't think computation and application of error-correcting codes (Par2) is a pure-integer workload?
the integer benchmarks show the CPU to be on par with Intel's offering
Hardly, barely on par in the BEST cases; significantly behind in the worst cases.
the integer bang for the buck is greater for AMD
How so? The i7 2600 can be had for less than $300 *retail*. Wholesale, bulk pricing would be significantly lower. The fastest Phenom II X6 goes for about $200 in retail. Bulldozer will cost more, if for no other reason than its huge size. So even if we take the retail cost differential (which will compress on bulk/wholesale), you only have a $100 difference.

Considering Bulldozer uses ~70 Watts more than i7 2600K at full load, how much does the $100 "savings" buy you?

ctd.

Oct 15, 2011
Let's be generous and assume Bulldozer draws 50 watts more in typical workloads, and operates only 20 hours/day. That's 1 kWh per day. Let's say it operates in such a regime for 330 days out of each year. That's a yearly 330 kWh in extra energy per CPU. Assume, again generously, a rough average of 10 c/kWh. This gives an annual cost of $33. In 3 years, you're at break-even.

And we haven't yet accounted for the extra cooling costs. So, with even super-generous assumptions, in less than 3 years the AMD-powered supercomputer loses out on cost to an Intel-powered one. As for performance, we've already established that Intel does better.

So where exactly is this mythical "bang for the buck" that AMD is supposed to be providing here?
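For anyone who wants to check that arithmetic, here is a quick sketch in C. All of the figures (50 W extra draw, 20 hours/day, 330 days/year, 10 c/kWh, $100 price gap per CPU) are the commenter's assumptions above, not measured data.

/* Break-even estimate for the extra power draw, using the assumed figures
 * from the comment above. Build: gcc -O2 breakeven.c -o breakeven */
#include <stdio.h>

int main(void) {
    double extra_watts   = 50.0;   /* assumed extra draw under typical load */
    double hours_per_day = 20.0;   /* assumed duty cycle                    */
    double days_per_year = 330.0;
    double usd_per_kwh   = 0.10;   /* assumed electricity price             */
    double price_gap_usd = 100.0;  /* assumed per-CPU price difference      */

    double kwh_per_year  = extra_watts * hours_per_day * days_per_year / 1000.0;
    double cost_per_year = kwh_per_year * usd_per_kwh;

    printf("Extra energy per year: %.0f kWh (about $%.0f)\n",
           kwh_per_year, cost_per_year);
    printf("Years to recoup the $%.0f price gap: %.1f\n",
           price_gap_usd, price_gap_usd / cost_per_year);
    return 0;
}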

Oct 16, 2011
it is also a single threaded workload with lots of pipeline stalls.
Pipeline stalls, perhaps -- which is where AMD's deep pipeline is a significant handicap, despite their attempts at improved branch prediction. But single-threaded? Visual Studio is perfectly capable of spawning multiple threads when compiling large solutions. Also, not all problems and all code in real life are embarrassingly parallel. Indeed, I don't expect real-life simulations to approach anywhere near the levels of CPU utilization and 100% thread workload that you see in some of those benchmarks. And by the way, in real code not all FP work will be offloaded onto co-processors, either.
In servers running multiple virtual windows environments...
Yes, but here we're talking about a supercomputer running simulations. A bit of a different use case.
And at the end of life for the server, Intel or AMD.
Supercomputers tend to last a lot longer than 3 years.

Oct 16, 2011
Are you seriously comparing AMD to Intel? Intel beats AMD in every aspect; it's not even funny anymore. AMD might be cheaper, yes, but they also suck big time.

Anyway, performance alone is not a comparative unit; cycles/watt, for example, is better.

"As for performance, we've already established that Intel does better." - Pink Elephant

Hardly. Your benchmarks are for existing apps, optimized for Intel CPUs. In fact, Windows hasn't yet got proper support for the new AMD CPUs with regard to thread allocation. Windows 8 does, and apparently that alone increases AMD benchmark scores by around 6%.

I expect supercomputer programmers will do a little better.


Nor does Windows utilize Intel CPUs fully, otherwise they wouldn't run on AMDs at all. Your points are without merit.

Oct 16, 2011
"You don't think compilation (of Chromium) is pretty much a pure-integer workload?" - Pink Elephant

It is. And it is also a single threaded workload with lots of pipeline stalls.


Ah, here comes the monkey out of the sleeve. You are talking out of Uranus. I compile Chromium regularly with 4 threads just fine.

Oct 16, 2011
and significantly better than today in terms of cost per CPU cycle.

Intel is already years ahead of AMD, even in power consumption.
Sure, AMD's chips might consume less energy, but let an Intel CPU match the wattage: it consumes the same energy and is still faster than AMD's.

smaller gates mean that defect rates are going to rise.

You mean lower yields? What the hell are you talking about? If you have a defect in a core they just disable the core and sell it for less than fully successful yields, no problem, and not just for AMD but for everyone.

result in higher power consumption and the resulting thermal issues.

Wrong. Source? get out.

If I were building a highest performance personal PC possible, I would probably go with Intel CPU's. But for a massively parallel

The only downside of getting Intels is cost, not performance or power consumption, which both beat AMD's.

Faster =/= more power consumption. In fact, faster can mean the task is done sooner and the core can be underclocked sooner.

Oct 16, 2011
I happen to have a 4 core Intel machine and a 6 core AMD machine here at the moment. The 4 core Intel machine is faster per core, but the extra two cores in the AMD box just about make up for the difference.

And the AMD CPU is less expensive.

That is the only valid point you have made so far.

Xeons and i7s are pretty expensive; that's the most likely reason they chose AMD over Intel. But they might as well have chosen Intel and saved on the electric bill.

Oct 16, 2011
"I compile chromium regularly with 4 threads just fine" - Kaas...

Check your CORE utilization figures and get back to us will you?

Because of code dependencies I strongly doubt those threads are properly mapped to each core, and that each core is actually being utilized to anywhere near maximum.

But yes, I have confirmed that MS-Build is multi-threaded.

Each object file (.cpp) can be compiled independently. Learn the difference between link time and compile time.

Oct 16, 2011
There are virtually no differences between the Intel and AMD instruction sets, and where those differences do exist they are for instructions that have been implemented for specific tasks like cryptography or DRM.


http://en.wikiped...icrocode

For every day code generation though, the compilers try to optimize for the most popular CPU's and that would be Intel.


Wrong. GCC selects generic x86, which means it should run equally well on all x86 CPUs. However, since I run the programs on my own computer, I just compile with -march=native. I can choose to optimize for Intel or AMD CPUs, and I believe even VIA.

Not that these compilers optimize well.


Compilers optimize well beyond human capabilities.

"Each object file (.cpp) can be compiled independently." - Kaas

Ya, they can be in principle.

Now what are your CPU core utilization numbers?


Not in principle; in practice.

Again, learn the difference between link time and compile time.

Oct 16, 2011
Python runs thousands of times slower than optimized code. Perhaps tens of thousands of times slower.

Python is a scripting language, not a coding language; it requires a "VM" just like Java and .NET.

Also look at this:
http://www.tomsha...-10.html

Yup, the cost per computation is less with AMD. This is particularly true for integer calculations and is the reason why AMD CPU's are very popular for web server farms and supercomputers.


But they cost more in the electric bill.


Oct 16, 2011
Scripting or coding - no real difference - it hardly matters.


Script languages are interpreted at run-time. Code languages are compiled before running.

Oct 16, 2011
"Here you go." - kaas

58% and 49% CPU usage by each instance of the compiler.

Sad performance.


50% meaning 100% of one core.
It shows both cores are at 100% in the top.

Oct 16, 2011
"50% meaning 100% of one core.
It shows both cores are at 100% in the top." - Kaas

Only two cores? Try 8.

Why are you running two instances of the compiler? Or is that normal behaviour in the text based Lintard world?


No, I configured it as such: "-j3" (means 3 jobs). When I had hyperthreading on I used "-j5" (meaning 5 jobs, 4 for compiling).

I am starting to believe you are just trolling or are really stupid; I am guessing both. Calling Linux retarded? It is the most used kernel on the planet. And why is there a Windows Server version coming that is text-only? Oh, maybe because it makes sense?

This supercomputer most likely uses Linux. You simply don't know anything about computer tech.

Oct 16, 2011
It isn't the Linux kernel that makes Unix Retarded.


Linux =/= Unix

Most probably. It is free after all. Being free is the only way Unix can compete in the marketplace.

No, because it can do many things that Windows can't.

And even though it is free, it just can't get past the 2 percent of the desktop market that it currently occupies.

That is because most computers come with Windows. People don't know the difference between Windows and Linux; they don't even know what an operating system is.

But that is what you get when you have an unusable interface and an OS that has no real hardware support.

Linux supports more hardware than Windows (even Windows 8 with ARM). And Ubuntu.

Given it's spectacular history of failure after failure, I no longer blame the OS, but the religious fervor with which Lintards embrace it's perpetual failure.


Calling NASA a failure? I remember a Dutch lecture from NASA about modifying Linux to run in the space shuttle.

Oct 16, 2011
"No i configured it as such "-j3" (means 3 jobs)." - kaas

None of those cryptic compiler switches are shown in your listing. Just as well, as such switches are an offensive relic of 1950s-era computing.

It's not a compiler flag, it's a make-tool flag. If you look you can actually see 3 gcc processes running and a make process.
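For readers following along, here is a minimal, hypothetical sketch (file names invented for illustration) of why make can keep several gcc jobs running at once: each translation unit compiles to its own object file independently, and only the final link step needs all of them.

/* A hypothetical two-file example. Each .c file compiles on its own,
 * which is what lets "make -j3" run several gcc processes in parallel:
 *
 *   gcc -c util.c -o util.o      (job 1)
 *   gcc -c main.c -o main.o      (job 2, can run at the same time)
 *   gcc util.o main.o -o demo    (link step, runs after both)
 */

/* ---- util.c ---- */
int add(int a, int b) { return a + b; }

/* ---- main.c ---- */
#include <stdio.h>

int add(int a, int b);   /* declared here; defined in util.c */

int main(void) {
    printf("2 + 3 = %d\n", add(2, 3));
    return 0;
}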

Oct 16, 2011
Absolutely right. But as we all know Linux = Unix.

=/= means "not equal"; Linux was derived from Unix, but it is nothing like it.

Like run on a wrist watch or a toaster controller.

It runs on all routers; that includes the routers that make up the internet.

No one cares.

Everyone cares, obviously, as do you.

But the fact is, people abandon Linux as fast as it's installed.

I know many people who enjoy Ubuntu.

Whatever. Linux is yesterdays OS for people content to live in the past.

What do you use? An OS designed for dumb monkeys?

Primitive tool nonsense wrapped in command line driven gibberish.

That make tool allows programs to be built for any operating system/architecture.

I happen to agree with the design decision made by the engineers at Cray.

Cray engineers are obviously not credible; they never mentioned surface area and power consumption. They still claim to be the fastest, which is obvious since it's one of the newest and biggest. No comparative data.

Oct 16, 2011
How about using "not" to mean not.

It is: http://en.wikiped...equation

This is one of the many, many reasons why I drank a toast to Ritchie's death.


You are glad someone lost his life?

Why do you use a 1900x1024 graphic screen to render 1950's teletype output?


Are you that dumb? Don't you see the Chromium browser in the background? I program on this machine, I do documents, graphics editing, play games, etc.

Wow, even ones that the compiler and linker don't support.

The make tool runs natively and calls whatever compiler/linker is needed.

Oct 16, 2011
Meh, another AMD v Intel / unix v Windows argument. And a dig at Dennis Ritchie. VD, I'd suggest you take a read of
http://www.thereg...bituary/

I could point out that 99% of the internet runs on Unix, in everything from routers, switches, DNS servers, etc., but I won't.
It's the right tool for some jobs; MS Windows is the right tool for others. Let it lie.

Oct 16, 2011
What a stupid discussion.

VD, you object to command-line tools? You think GUI-based tools are somehow different? Peek underneath the pretty exterior of Visual Studio, and what do you find? Project settings files overflowing with command switches for compilers and linkers. MSBuild is a command-line, text-driven engine. You want a GUI for coding on Linux? Never heard of Eclipse, I take it?

What's all this trash talk about Unix/Linux being unusable, when Apple succeeded merely by slapping a pretty UI on top of BSD Unix, and blowing Microsoft out of the water? You want to talk about security, buffer overflows, etc? Well, I suppose you find Microsoft's offerings more secure than Unix/Linux? WELL, DO YOU???

There are languages out there that take initialization and security seriously. The managed languages, for instance, such as Java or C#. Guess what: none of this would exist if it weren't boot-strapped by *nix, C/C++, and text-based development tools.

Oct 16, 2011
The Gospel of Cray Engineers also needs a dose of reality. Do you realize, VD, how many times in the past those mythical geniuses at Cray nearly succeeded at running their company into the ground? It is a bloody miracle that Cray is still in existence today, actually.

Yes, I don't doubt AMD gave them a big discount on the processors. I also have no doubt Intel would've given an equally big discount. The publicity alone is worth selling the processors for such an application at cost.

On the basis of performance, scalability, versatility, power efficiency -- you name it -- I just don't understand that particular choice by Cray. The only possibility that makes any sense to me, is that some Cray project manager got a nice kickback from AMD under the table. IOW, good old-fashioned corruption.

And before you accuse me of being an Intel fanboy, I used to like AMD a lot, especially back in the Athlon days. But in recent years, they have really dropped the ball a few times too many.

Oct 16, 2011
Sorry charlie, but the Apple OS is based on Mach.
The kernel may be based on Mach (which is also a Unix-like kernel, BTW), but the rest was largely BSD. http://en.wikiped...Mac_OS_X
It depends upon the complexity of the command.
That's why people create GUIs that hide the complexity of the command.
I often think that Microsoft is trying its best to recreate every failure in Unix.
Or more likely, they're trying to develop their software in a modular and flexible manner. But you don't sound like you have much experience with software architecture...
designed by inferiors without any thought of security.
Security is a luxury. Back in the day, your apps had to run on laughable hardware, fit within laughable memory constraints, and yet still feature real-time-capable, fast-response functionality. We can afford to talk about security today, because today's laptop PCs are faster than the supercomputers of 20 years ago. So, STFU.

Oct 16, 2011
There is a limited global market for supercomputers.
And yet, few companies in that sphere floundered as badly or as frequently as Cray.
Your failure to listen is your problem, not mine.
The feeling is mutual. You've been shown that Bulldozer is inferior on performance both in lightly-threaded workloads, and in highly-threaded workloads that are not pure-integer tasks. There is a VERY narrow, and in practice UNREALISTIC, window where Bulldozer does only very slightly better.

You've been shown that Bulldozer is woefully inferior on power dissipation, with all the attendant costs not just in terms of direct electricity consumption but also thermal design constraints, cooling and venting overhead, and maintenance thereof.

The prices of the processors are not sufficiently different to justify those trade-offs.

But whatever. I don't feel that debating this with you any further is going to contribute usefully to anything, so I'll stop here.

Oct 16, 2011
"I could point out that 99% of the internet runs on unix." - TheDog

Please don't misquote my name

Whoop-de-doo. HTTP, NNTP, UDP, TCP and all of the other protocols can be and have been implemented on a bloody 6502 - without any OS at all.

I've just done a quick search for any reference to this claim and found none; a link would be appreciated.

Unix is used for one reason and one reason only.
It is free.

No, it is not.

Oct 16, 2011
"VD, I'd suggest you take a read of .." - TheDog

Ya, Ritchie, the man who brought the world one of the worst programming languages possible, the worst operating system imaginable, and whose sheer incompetence flooded the world with buffer overflows and uninitialized buffers by design - a legacy that lives on to this day in the form of 90% of the code exploits on and off the web.

For those intellectual crimes alone, Ritchie should have had his ignorant throat slit decades ago.


He gave programmers a tool; some were surgeons, some were butchers.

Oct 17, 2011
VD, stop being an idiot; you aren't intelligent.

Unix is not free; it costs money. Linux is free, but it is not Unix; it is derived from Unix, and it is very different.
In fact, all operating systems are practically derived from Unix, but they have changed so much that you can't call them Unix anymore, even Windows.

You are just trolling this thread, and I wonder what OS you use. Ignorant hypocritical bastard.

http://www.zdnet....soft/459

Oct 17, 2011
Concerning Ritchie: I don't think anyone wants to put out bad code on purpose. Many times coders throw something together just to get it working, intending to go back and rewrite the code. But at some point the marketing and numbers folks force coders to release their work before it's properly debugged and ready.

I remember back in the early 90's when Simutronics (an on-line gaming company; I was doing part-time development for their GemStone3/4 product) made 3D cells available to its coders. I had never worked with cells before and neither had my co-workers, but I had the cell data entry, retrieval and sorting routines in workable shape in a few weeks. It took me over a month to tune the code to where it worked quickly.

Often when we are under the gun for production we kludge something together - mainly to test other code. Marketing folks don't understand the difference between a kludge and a working product until after it is released and customers start screaming.

Oct 17, 2011
It's easy to throw stones at the leader, especially when we don't have to live or work within their limitations. All instructions take time, memory and cycles, all of which were in short supply at the dawn of the computer age.

While I disagree with your assessment about the original product, I do agree with your assessment of later iterations. There is no excuse for not fixing their product, even if they had to step back and start from scratch. Still, I would wish death upon no one, incompetent or not.
