'Jaguar' supercomputer gaining speed

Oct 12, 2011
File photo shows a staff member of Japan's national Riken institute opening a rack of the "K Computer" supercomputer at Riken's laboratory in Japan on June 21, 2011. Cray Inc. said it has sealed a deal to overhaul the US Department of Energy's "Jaguar" supercomputer, making it faster than any other machine on the planet. The new supercomputer will be renamed "Titan".

Cray Inc. said it has sealed a deal to overhaul the US Department of Energy's "Jaguar" supercomputer, making it faster than any other machine on the planet.

The supercomputer at the DOE Oak Ridge National Laboratory will be renamed "Titan" after it is beefed up with speedy, powerful chips from California companies NVIDIA and AMD.

"All areas of science can benefit from this substantial increase in , opening the doors for new discoveries that so far have been out of reach," said associate lab director for computing Jeff Nichols.

"Titan will be used for a variety of important research projects, including the development of more commercially viable biofuels, cleaner burning engines, safer nuclear energy and more efficient solar power."

NVIDIA specializes in GPU (graphics processing unit) chips used to enable seamless, rich graphics and smooth action in videogames by processing myriad tasks simultaneously through parallel computing.
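The parallel-computing idea the article refers to can be sketched in a few lines of C. The example below is purely illustrative and is not code from the Titan project; it spreads the same per-element operation across CPU threads with OpenMP, where a GPU would spread it across thousands of cores:

/* Hypothetical sketch of data-parallel processing (not Titan code).
 * Compile with: gcc -fopenmp scale.c -o scale */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float data[N];

    /* Each loop iteration is independent, so the work can be split
     * across many cores; a GPU applies the same idea at far larger scale. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        data[i] = data[i] * 2.0f + 1.0f;
    }

    printf("first element: %f\n", data[0]);
    return 0;
}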

Rival company AMD will provide powerful chips that process data in sequence as is standard in home or work computers.

"Oak Ridge's decision to base Titan on Tesla GPUs underscores the growing belief that GPU-based heterogeneous computing is the best approach to reach exascale computing levels within the next decade," said NVIDIA Steve Scott.

Cray valued the multi-year contract at more than $97 million and said that Titan will be at least twice as fast and three times as energy efficient as today's fastest supercomputer, which is located in Japan.


User comments : 89


hard2grep
not rated yet Oct 12, 2011
Nvidia is awesome; I am still struck by the blow of AMD buying ATI. I can no longer keep my eggs in one basket... I have a high-end Nvidia card and it rocks the numbers. Doesn't Microsoft need to bask a bit in the light for its new version of DX? Them there shader cells work pretty good if you ask me, and the architecture required of DX? Perfect for parallel computing.
Ramael
1 / 5 (2) Oct 12, 2011
How fast??
SCVGoodToGo
5 / 5 (1) Oct 12, 2011
Well, the K computer is rated 8.16 petaFLOPS, so if they hit their target of 2x, it would be 16.32 petaFLOPS.

Hope that helps.
SteveL
not rated yet Oct 12, 2011
Nvidia is awesome; I am still struck by the blow of AMD buying ATI.I can no longer keep my eggs in one basket... I have a high- end Nv and it rocks the numbers. Doesn't Microsoft need to bask a bit in the light for its new version of dx Them there shader cells work pretty good if you ask me, and and the architecture required of dx? Perfect for parrallel computing.

I agree. My Nvidia GTX 580 does quite well on einstein@home. What they didn't quite make clear in the article though: does Tesla use Nvidia GPUs or do they make their own?
Horus
1 / 5 (1) Oct 12, 2011
Oak Ridge using CUDA is asinine. AMD has put in the heavy lifting to optimize Bulldozer with GPGPUs for its 6000-series GPGPUs, and OpenCL runs circles around CUDA. More to the point, Cray is committed to LLVM/Clang, and Apple/AMD/ARM/Intel/IBM all use LLVM/Clang with OpenCL 1.1. When the LLVM 3.x stack is released in a few weeks, I'm sure Cray will be pushing for AMD Radeon HD 6990 Graphics or its FireStream processor line.

AMD FireStream 9350 / 9370 GPU Compute Accelerators stomp Tesla all over the place.
aroc91
not rated yet Oct 12, 2011
I agree. My Nvidea GTX 580 does quite well on einstein@home. What they didn't quite make clear in the article though: Does Tesla use Nvidea GPU's or do they make their own?


Tesla is an Nvidia product line, not a separate company. The most recent Tesla iterations are based on the Fermi architecture.
Vendicar_Decarian
1 / 5 (3) Oct 12, 2011
Gonna be used to simulate fire in America.

Gonna be used to simulate the entire planet in Japan.
PinkElephant
5 / 5 (1) Oct 12, 2011
@Horus,

The choice of AMD over Intel for CPUs is a dubious one (considering Intel is superior both on performance and TDP); Bulldozer has turned out to be quite a disappointment. For instance, see here:

http://www.anandt...0-tested

I don't see what the big deal is about OpenCL and LLVM: both should be able to work just as well with Intel and NVIDIA hardware. You make it sound like NVIDIA doesn't support OpenCL, which is just plain false. In fact, NVIDIA was one of OpenCL's original inventors and proponents!

http://developer....m/opencl

As for specific cards, the horse race never ends. NVIDIA will have Kepler arriving into retail channels sometime early in 2012, which is supposed to be a huge leap above the current Fermi architecture. In GPUs, like in CPUs, performance per watt is very important when talking about massive installations; it could be that NVIDIA is simply more efficient than AMD and was chosen because of that.
Vendicar_Decarian
1 / 5 (4) Oct 14, 2011
"The choice of AMD over Intel for CPUs is a dubious one " - PinkieWInkie

The AMD CPU's are being used to feed the Fermi GPU's. And it is the Fermi GPU's that are doing the floating point work.

Bulldozer performs well on integer operations, and is scheduled to see a 30 percent or more increase in performance over the next couple of years.

AMD probably offers the best integer performance for the buck.
PinkElephant
5 / 5 (1) Oct 14, 2011
AMD probably offers the best integer performance for the buck.
Hardly:

http://www.anandt...tested/7

Ignore the 3DSMax and Cinebench, and focus on 7-zip, Par2, TrueCrypt, and Chromium Compile tests -- these are all pretty much purely integer workloads. Only in 7-zip is the latest, biggest, and fastest Bulldozer just barely edging out an ordinary, desktop-class Sandy Bridge chip -- a design that's nearly a year old now. And what about power consumption at full load?

http://www.anandt...tested/9

THAT'll cost a buck or two.
scheduled to see a 30 percent of more increase in performance over the next couple of years
As if Intel wasn't? Within the next 6 months, Intel's next-gen Ivy Bridge comes out on a next-gen 22-nm 3D-tri-gate process, with something like a further 50% TDP reduction per work unit. That's light-years ahead of AMD on process alone, never mind architecture...
Vendicar_Decarian
2.3 / 5 (3) Oct 14, 2011
"Hardly" - Pink Elephant

The only two benchmarks in your reference that are applicable to integer performance are the zip and TrueCrypt benchmarks, as these would not be using floating point. The rendering benchmarks will most certainly be using floating point.

In any case, the integer benchmarks show the CPU to be on par with Intel's offering, but the AMD CPU is less costly.

Hence the integer bang for the buck is greater for AMD, which is what you are denying.

"As if Intel wasn't? Within the next 6 months, Intel's next-gen Ivy Bridge comes out on a next-gen 22-nm 3D-tri-gate process..." - Pink Elephant

Yup, and to be competitive, AMD is going to have to license the tri-gate transistor technology. Not doing so will mean that they won't be able to compete with Intel in the arena of low power parts. Particularly as they try to reduce transistor sizes.

But that is 6 months from now. For the moment, AMD has the integer edge in cost per operation, as your own refs show.
Buyck
not rated yet Oct 15, 2011
Titan will be around 20 petaflops. But with chips from AMD on a 45 nm process!? Hopefully they use 28 nm or less to reduce the energy needs!
PinkElephant
not rated yet Oct 15, 2011
@Vendicar_Decarian,
The only two benchmarks in your reference that are applicable to integer performance
You don't think compilation (of Chromium) is pretty much a pure-integer workload? You don't think computation and application of error-correcting codes (Par2) is a pure-integer workload?
the integer benchmarks show the CPU to be on par with Intel's offering
Hardly, barely on par in the BEST cases; significantly behind in the worst cases.
the integer bang for the buck is greater for AMD
How so? The i7 2600 can be had for less than $300 *retail*. Wholesale, bulk pricing would be significantly lower. The fastest Phenom II X6, goes for about $200 in retail. Bulldozer will cost more, if for no other reason than its huge size. So even if we take the retail cost differential (which will compress on bulk/wholesale), you only have a $100 difference.

Considering Bulldozer uses ~70 Watts more than i7 2600K at full load, how much does the $100 "savings" buy you?

ctd.
PinkElephant
not rated yet Oct 15, 2011
Let's be generous and assume Bulldozer draws 50 watts more in typical workloads, and operates only 20 hours/day. That's 1 kWh extra per day. Let's say it operates in such a regime for 330 days out of each year. That's a yearly 330 kWh in extra energy per CPU. Assume, again generously, a rough average of 10 c/kWh. This gives an annual cost of $33. In 3 years, you're at break-even. (See the sketch after this comment.)

And we haven't yet accounted for the extra cooling costs. So, with even super-generous assumptions, in less than 3 years the AMD-powered supercomputer loses out on cost to an Intel-powered one. As for performance, we've already established that Intel does better.

So where exactly is this mythical "bang for the buck" that AMD is supposed to be providing here?
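As a check on the arithmetic above, here is a minimal, hypothetical C sketch using the same assumed figures from the comment (50 W extra draw, 20 hours/day, 330 days/year, 10 c/kWh, and a $100 price gap); the numbers are the commenter's assumptions, not measurements:

/* Hypothetical break-even calculation using the comment's assumed figures. */
#include <stdio.h>

int main(void) {
    double extra_watts = 50.0;      /* assumed extra draw of the AMD part */
    double hours_per_day = 20.0;    /* assumed duty cycle */
    double days_per_year = 330.0;   /* assumed days of operation per year */
    double price_per_kwh = 0.10;    /* assumed electricity price, USD */
    double cpu_price_gap = 100.0;   /* assumed retail price difference, USD */

    double extra_kwh_per_year = extra_watts / 1000.0 * hours_per_day * days_per_year;
    double extra_cost_per_year = extra_kwh_per_year * price_per_kwh;

    printf("extra energy per year: %.0f kWh\n", extra_kwh_per_year);  /* 330 kWh */
    printf("extra cost per year:   $%.2f\n", extra_cost_per_year);    /* $33.00 */
    printf("break-even:            %.1f years\n", cpu_price_gap / extra_cost_per_year);
    return 0;
}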
Vendicar_Decarian
1 / 5 (2) Oct 15, 2011
"You don't think compilation (of Chromium) is pretty much a pure-integer workload?" - Pink Elephant

It is. And it is also a single threaded workload with lots of pipeline stalls.

These CPU's are designed for multiple threads, and if you look at the 7zip and Rar benchmarks you find that the new AMD chips perform on par with Intel's. Slightly better on 7zip, and slightly worse on multithreaded RAR.

And this is how these AMD chips will primarily be used. In servers running multiple virtual windows environments processing javascript or some drek code written in Python.

The thing is, the Intel offering comes in at just over $300 while the AMD chips come in at around $200. So the cost per operation for AMD is lower.

"This gives annual cost of $33. In 3 years, you're at break-even." - PinkElephant.

And at the end of life for the server, Intel or AMD.

Vendicar_Decarian
1 / 5 (2) Oct 15, 2011
"As for performance, we've already established that Intel does better." - Pink Elephant

Hardly. Your benchmarks are for existing apps, optimized for Intel CPU's. In fact, Windows hasn't yet got proper support for the new AMD CPU's with regard to thread allocation. Windows 8 does, and apparently that alone increases AMD benchmark scores by around 6%.

I expect supercomputer programmers will do a little better.
PinkElephant
not rated yet Oct 16, 2011
it is also a single threaded workload with lots of pipeline stalls.
Pipeline stalls, perhaps -- which is where AMD's deep pipeline is a significant handicap, even despite their attempts at improved branch prediction. But single-threaded? Visual Studio is perfectly capable of spawning multiple threads when compiling large solutions. Also, not all problems and all code in real life would be embarrassingly parallel. Indeed, I don't expect real-life simulations to approach anywhere near the levels of CPU utilization and 100% thread workload that you see in some of those benchmarks. And by the way in real code, not all FP code will be offloaded onto co-processors, either.
In servers running multiple virtual windows environments...
Yes, but here we're talking about a supercomputer running simulations. A bit of a different use case.
And at the end of life for the server, Intel or AMD.
Supercomputers tend to last a lot longer than 3 years.
kaasinees
1 / 5 (1) Oct 16, 2011
Are you seriously comparing AMD to Intel? Intel beats AMD in every aspect; it's not even funny anymore. AMD might be cheaper, yes, but they also suck big time.

Anyway, performance alone is not a comparative unit; cycles/watt, for example, is better.

"As for performance, we've already established that Intel does better." - Pink Elephant

Hardly. Your benchmarks are for existing apps, optimized for Intel CPU's. In fact, Windows hasn't yet got proper support for the new AMD CPU's with regard to thread allocation. Windows 8 does, and apparently that alone increases AMD benchmark scores by around 6%.

I expect supercomputer programmers will do a little better.


Nor does Windows utilize Intel CPUs fully, otherwise they wouldn't run on AMDs at all. Your points are without merit.
kaasinees
1 / 5 (1) Oct 16, 2011
"You don't think compilation (of Chromium) is pretty much a pure-integer workload?" - Pink Elephant

It is. And it is also a single threaded workload with lots of pipeline stalls.


Ah here comes the monkey out of the sleeve. You are talking with uranus. I compile chromium regularly with 4 threads just fine.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Visual Studio is perfectly capable of spawning multiple threads when compiling large solutions." - PinkElephant

Really? I hadn't noticed.

"Also, not all problems and all code in real life would be embarrassingly parallel." - Pink Elephant

Things are often quite limited with coarse-grained parallelism, but there are a rather huge number of opportunities for fine-grained parallelism just above, at or just below the opcode level.

"I don't expect real-life simulations to approach anywhere near the levels of CPU utilization and 100% thread workload that you see in some of those benchmarks." - Pink Elephant

Well, if that were the case then there would be little utility in using the massively parallel computational abilities of modern graphic cards in those simulations.

The fact that Fermi graphics cards are part of this supercomputer and are intended for massively parallel processing tells me that you are quite wrong in your expectation, and should have told you as well.

cont
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
The fact is, whether it be Intel or AMD CPU's, CPU based floating point won't be much of an issue on these supercomputers. The CPU's are primarily used for controlling the program flow and moving blocks of data rapidly into and out of video memory for processing by the GPU.

In fact for supercomputer design, I doubt very much if the speed of the CPU makes much of a difference to the system performance, as long as you are within the ballpark.

You seem to be interested in defending Intel CPU's at all costs. As for myself, I couldn't care less who has the faster chips, and have no difficulty in admitting that Intel has for several years now been on top in terms of performance.

AMD however undercuts them in terms of price and to some extent feature sets that are attractive to the mobile market, so they have some market share for that reason.

Intel could squash AMD like a bug if it wanted, but needs AMD as an alternative source for parts for various government contracts that demand...
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
such a thing.

Further, I doubt if Intel wants to be known as a monopoly. The existence of AMD as a viable source for parts makes such a label impossible.

Bulldozer is clearly a capable CORE design, and will continue to be so as they tweak the design and make it more efficient. In two years, Bulldozer will be where the best Intel chips are today in terms of performance, and significantly better than today in terms of cost per CPU cycle.

The biggest problem AMD is going to have over the next while is going to smaller gate sizes. Bulldozer has something like 2 billion transistors per chip and smaller gates mean that defect rates are going to rise.

Intel also has a big advantage with its tri-gate technology, which will allow it to reduce the unwanted leakage in its transistors that results in higher power consumption and the resulting thermal issues.

If I were building the highest performance personal PC possible, I would probably go with Intel CPU's. But for a massively parallel
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
supercomputer I would most probably go with AMD CPU's to reduce overall cost while maintaining high throughput to the GPU's which are doing virtually all of the computations.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"AMD might be cheaper yes but they also suck big time. " - Kaas

Depends on what you are after. I am more interested in the number of cores than actual throughput through any individual core. So AMD makes for a better choice.

I happen to have a 4 core Intel machine and a 6 core AMD machine here at the moment. The 4 core Intel machine is faster per core, but the extra two cores in the AMD box just about make up for the difference.

And the AMD CPU is less expensive.
kaasinees
1 / 5 (1) Oct 16, 2011
and significantly better than today in terms of cost per CPU cycle.

Intel is already years ahead of AMD, even in power consumption.
Sure, AMDs might consume less energy; now let an Intel CPU match the wattage. It consumes the same energy but is still faster than AMD's.

smaller gates mean that defect rates are going to rise.

You mean lower yields? What the hell are you talking about? If you have a defect in a core they just disable the core and sell it for less than the fully working parts, no prob, not just for AMD but for all.

result in higher power consumption and the resulting thermal issues.

Wrong. Source? get out.

If I were building a highest performance personal PC possible, I would probably go with Intel CPU's. But for a massively parallel

The only downside of getting Intels is cost, not performance or power consumption, both of which beat AMD's.

Faster =/= more power consumption. In fact, faster can mean the task is done sooner and the core can be underclocked sooner (see the sketch below).
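A minimal, hypothetical C sketch of that race-to-idle idea, with invented figures purely for illustration (none of these wattages or runtimes come from the article or the chips being discussed):

/* Hypothetical race-to-idle arithmetic with made-up numbers:
 * energy = power x time, so a faster chip can finish sooner and spend
 * the rest of the interval at a low idle power. */
#include <stdio.h>

int main(void) {
    /* All figures below are invented for illustration only. */
    double fast_load_w = 95.0, fast_seconds = 100.0;   /* faster chip */
    double slow_load_w = 65.0, slow_seconds = 170.0;   /* slower chip */
    double idle_w = 5.0, window_seconds = 170.0;       /* shared time window */

    double fast_energy = fast_load_w * fast_seconds
                       + idle_w * (window_seconds - fast_seconds);  /* 9850 J */
    double slow_energy = slow_load_w * slow_seconds;                /* 11050 J */

    printf("fast chip: %.0f J, slow chip: %.0f J\n", fast_energy, slow_energy);
    return 0;
}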
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"I compile chromium regularly with 4 threads just fine" - Kaas...

Check your CORE utilization figures and get back to us will you?

Because of code dependencies I strongly doubt those threads are properly mapped to each core, and that each core is actually being utilized to anywhere near maximum.

But yes, I have confirmed that MS-Build is multi-threaded.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Anyway performance alone is not a comparative unit cycles/watt for ex is better." - Kaas...

Cycles per watt only became important as a propaganda tool to keep tools interested when they abandoned the computations per second benchmark.

In cycles per watt, ARM beats everyone. Hands down.
kaasinees
1 / 5 (1) Oct 16, 2011
I happen to have a 4 core Intel machine and a 6 core AMD machine here at the moment. The 4 core Intel machine is faster per core, but the extra two cores in the AMD box just about make up for the difference.

And the AMD CPU is less expensive.

The only valid point you have made so far.

Xeons and i7's are pretty expensive; that's the most likely reason they chose AMD over Intel. But they might as well have chosen Intel and saved on the electric bill.
kaasinees
1 / 5 (1) Oct 16, 2011
"I compile chromium regularly with 4 threads just fine" - Kaas...

Check your CORE utilization figures and get back to us will you?

Because of code dependencies I strongly doubt those threads are properly mapped to each core, and that each core is actually being utilized to anywhere near maximum.

But yes, I have confirmed that MS-Build is multi-threaded.

Each object file (.cpp) can be compiled independently. Learn the difference between link time and compile time.
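To illustrate the point about independent compilation units, here is a minimal, hypothetical C example (the file names and compile commands in the comments are illustrative): each source file can be compiled to an object file on its own, so a build tool such as make -jN can hand different files to different cores, and only the final link step needs all of them.

/* Hypothetical example of independent compilation units.
 * square.c and main.c can be compiled in parallel, e.g.:
 *   cc -c square.c              (produces square.o)
 *   cc -c main.c                (produces main.o)
 *   cc square.o main.o -o demo  (only the link step needs both)
 */

/* square.c */
int square(int x) {
    return x * x;
}

/* main.c */
#include <stdio.h>

int square(int x);   /* declaration; the definition lives in square.c */

int main(void) {
    printf("%d\n", square(7));
    return 0;
}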
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Nor does windows utilize Intel CPU's fully otherwise they wouldn't run on AMD's at all." - kaas

There are virtually no differences between the Intel and AMD instruction sets, and where those differences do exist they are for instructions that have been implemented for specific tasks like cryptography or DRM.

For these tasks programmers typically implement several different code paths for different popular CPU's, one of which is usually given to them by Intel itself - heavily optimized of course.

I presume AMD does the same thing.

For every day code generation though, the compilers try to optimize for the most popular CPU's and that would be Intel.

Not that these compilers optimize well. They still perform poorly, typically by a factor of 4 to 6 times slower than truly optimal code, and sometimes as slow as several hundred times slower than optimal code. And this presumes that they are generating machine level code rather than parsing text strings as is done in Python etc.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
Python runs thousands of times slower than optimized code. Perhaps tens of thousands of times slower.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Each object file (.cpp) can be compiled independently." - Kaas

Ya, they can be in principle.

Now what is your CPU core utilization numbers?
kaasinees
1 / 5 (1) Oct 16, 2011
There are virtually differences between the Intel and AMD instruction sets, and where those differences do exist they are for instructions that have been implemented for specific tasks like cryptography or DRM.


http://en.wikiped...icrocode

For every day code generation though, the compilers try to optimize for the most popular CPU's and that would be Intel.


Wrong. GCC selects generic x86, which means it should run equally well on all x86 CPUs. However, since I run the programs on my own computer I just compile with -march=native. I can choose to optimize for Intel or AMD CPUs, and I believe even VIA.

Not that these compilers optimize well.


Compilers optimize well beyond human capabilities.

"Each object file (.cpp) can be compiled independently." - Kaas

Ya, they can be in principle.

Now what is your CPU core utilization numbers?


Not in principle, in practice.

Again, learn the difference between link-time and compile-time.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"result in higher power consumption and the resulting thermal issues." - Vendicar

"Wrong. Source? get out." - Kaas...

Tiny transistors are leaky transistors. Leaky transistors are energy inefficient transistors. If you want higher transistor counts and lower thermal losses then you have to find a way to decrease the leakage. That is what Intel's tri-gate technology is all about. And it is the only way they are going to be able to compete with ARM in terms of overall power consumption.

You might ask yourself why Intel is moving to tri-gate technology if it isn't for the purpose of reducing power consumption.

"The only downside of getting intels is cost" - kaas...

Yup, the cost per computation is less with AMD. This is particularly true for integer calculations and is the reason why AMD CPU's are very popular for web server farms and supercomputers.

Bulldozer gives AMD a continued future in that market.

However against tri-gate they are in trouble.

kaasinees
not rated yet Oct 16, 2011
Python runs thousands of times slower than optimized code. Perhaps tens of thousands of times slower.

Python is a scripting language not a coding language, it requires a "VM" just like java and .NET

also look this:
http://www.tomsha...-10.html

Yup, the cost per computation is less with AMD. This is particularly true for integer calculations and is the reason why AMD CPU's are very popular for web server farms and supercomputers.


But they cost more in the electric bill.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Compilers optimize well beyond human capabilities." - Kaas

Ya, morons have been regurgitating that lie for years, and it is less true today than ever before.

I have an interesting lecture on my HD where a code optimizer for AE games spends an hour lamenting the fact that he can regularly beat C compilers at code optimization by a factor of 200 or more.

That is, his modified C runs 200 times faster than the optimized code that the compiler would normally generate.

And on top of that there is undoubtedly another factor of 2 to 4 in speed improvement that can be had by hand placing the opcodes.

So on that basis alone, for games at least, C compilers can be seen to be somewhere around 400 to 800 times slower than properly optimized code.

For normal instruction mixes though, C compilers do somewhat better - coming in at around 4 to 10 times slower than hand optimized code.

There have been essentially no improvements in compiler optimization efficiency in the last 20 years.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
In terms of code generation size, C compilers can be even worse. Back in the late 90's I reverse engineered a SCSI driver by hand and managed to reduce the size by a factor of 10 compared to its C generated counterpart. Or perhaps it was a factor of 20. I actually forget. But somewhere around there.

The grotesque inefficiency in that instance was caused by the compiler inlining delay loops (for speed) and unnecessarily passing parameters on the stack rather than doing it in registers.

Back in the early 2000's I rewrote the gif decompressor for Netscape - as a lark - and reduced its size by a factor of 6, and increased its speed similarly. It was written in C of course, and C optimizes very poorly.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Python is a scripting language not a coding language. - Kaas

And yet scripting languages are used to write more code than coding languages. So your distinction is an artificial one.

Scripting or coding - no real difference - it hardly matters. It runs 10,000 times slower than properly optimized code if not slower.

As for ergs per computational cycle, that is a distinction that became important only when Intel could no longer raise clock speeds.

Only Intel Fanboys are fixated on such a self serving metric.

Thinking people correctly see it is Intel's way of altering the playing field now that AMD has surpassed Intel in raw clock speed.

kaasinees
not rated yet Oct 16, 2011
Here you go.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"GCC selects generic X86 which means it should run equally well on all x86 cpu" - Kaas

That would be most difficult since INC and DEC have been reassigned in the new x86 instruction sets and now correspond to new opcode prefixes.

Try running the output from a 386 compiler of yesteryear on a modern x86 machine (native) and watch how fast it crashes.

kaasinees
1 / 5 (1) Oct 16, 2011
Scripting or coding - no real difference - it hardly matters.


Script languages are interpreted at run-time. Code languages are compiled before running.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Here you go." - kaas

58% and 49% CPU usage by each instance of the compiler.

Sad performance.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Script languages are interpreted at run-time. Code languages are compiled before running." - kaas

And compiled languages like C are typically running 10 times slower than they should, while scripted languages 10,000 times slower.

kaasinees
1 / 5 (1) Oct 16, 2011
"Here you go." - kaas

58% and 49% CPU usage by each instance of the compiler.

Sad performance.


50% meaning 100% of one core.
It shows both cores are at 100% in the top.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
http://www.anandt...tested/9

Bulldozer consumes about twice the power as Intel's best offerings. About 200 watts for a fully utilized Bulldozer.

But then Fermi is going to come in around 400 to 500 watts.

So you save around 1/7 th the power by going to Intel. But you increase your CPU costs by a factor of 4 to 8 depending on how aggressively they discount the CPU's in volume.

The Jaguar designers selected AMD as the best source of it's CPUs.

You can't understand why even though it has been repeatedly explained to you.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"50% meaning 100% of one core.
It shows both cores are at 100% in the top." - Kaas

Only two cores? Try 8.

Why are you running two instances of the compiler? Or is that normal behaviour in the text based Lintard world?
kaasinees
1 / 5 (1) Oct 16, 2011
"50% meaning 100% of one core.
It shows both cores are at 100% in the top." - Kaas

Only two cores? Try 8.

Why are you running two instances of the compiler? Or is that normal behaviour in the text based Lintard world?


No, I configured it as such: "-j3" (means 3 jobs). When I had hyperthreading on I used "-j5" (meaning 5 jobs, 4 for compiling).

I am starting to believe you are just trolling or are really stupid; I am guessing both. Calling Linux retarded? It is the most used kernel on the planet. And why is there a Windows server version coming which is text only? Oh, maybe because it makes sense?

This supercomputer most likely uses Linux. You simply don't know anything about computer tech.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Calling linux retarded? It is the most used kernel on the planet." - kaas

It isn't the Linux kernel that makes Unix Retarded.

"this supercomputer most likely uses linux." - Kaas

Most probably. It is free after all. Being free is the only way Unix can compete in the marketplace. And even though it is free, it just can't get past the 2 percent of the desktop market that it currently occupies.

But that is what you get when you have an unusable interface and an OS that has no real hardware support.

It is sad that, when you add up all of the millions of man-years of effort that have gone into Unix, it is still a complete failure as an OS that people can use outside of toaster controllers and digital banana holders.

Given its spectacular history of failure after failure, I no longer blame the OS, but the religious fervor with which Lintards embrace its perpetual failure.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"No i configured it as such "-j3" (means 3 jobs)." - kaas

None of those cryptic compiler switches are shown on your listing. Just as well as such switches are an offensive relic of 1950's era computing.

I don't live in the 1950's and would rather not do so. Hence I avoid 1950's era tools like the perpetual failure that is Unix.
kaasinees
1 / 5 (1) Oct 16, 2011
It isn't the Linux kernel that makes Unix Retarded.


linux =/= unix

Most probably. It is free after all. Being free is the only way Unix can compete in the marketplace.

No, because it can do many things that windows cant.

And even though it is free, it just can't get past the 2 percent of the desktop market that it currently occupies.

That is because most computers come with windows. People dont know the difference between windows or linux, they dont even know what an operating system is.

But that is what you get when you have an unusable interface and an OS that has no real hardware support.

Linux supports more hardware than windows.(even windows8 with ARM). And ubuntu.

Given it's spectacular history of failure after failure, I no longer blame the OS, but the religious fervor with which Lintards embrace it's perpetual failure.


Calling NASA a failure? I remember a Dutch lecture from NASA talking about modifying Linux to run in the space shuttle.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"and why is there coming a windows server version which is text only" - kaas

Is there?

I sometimes think that Microsoft is trying to repeat every Unix failure in existence.

I attribute it to Unix pollution from the legions of poor programmers they hire. Inferior ideas from legions of inferior minds.

Having said that, you don't need a graphical interface to operate a server. A windowed text based system is all you really need.

kaasinees
1 / 5 (1) Oct 16, 2011
"No i configured it as such "-j3" (means 3 jobs)." - kaas

None of those cryptic compiler switches are shown on your listing. Just as well as such switches are an offensive relic of 1950's era computing.

It's not a compiler flag, it's a maketool flag. If you look you can actually see 3 gcc processes running and a make process.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"linux =/= unix" - Kaas

Absolutely right. But as we all know Linux = Unix.

"No, because it can do many things that windows cant." - Kaaz

Like run on a wrist watch or a toaster controller.

No one cares.

"That is because most computers come with windows." - Kaas

Partly true. But the fact is, people abandon Linux as fast as it's installed. It is always never ready for prime time.

Face it. Unix couldn't even compete against DOS.

"Linux supports more hardware than windows.(even windows8 with ARM)." - Kaas

Astonishing. Why doesn't it support the hardware mouse pointer on my machine 1, or accelerated graphics on machine 2, or the high speed IDE ports on machine 3? Or the audio on machine 4?

And why did version 6 cack all over itself when I told Linux that it had permission to update itself from its central server?

Whatever. Linux is yesterday's OS for people content to live in the past.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"I remember a dutch lecture from NASA talking about modifying linux to run in space shuttle." - Kaas

I remember that the space shuttle program was a colossal white elephant that is finally and thankfully scrapped. It was a program that did great damage to NASA and to space science.

I don't blame those failures on Unix. I just wouldn't use the sad and laughable episode in American history as an example of anything positive.

With Apollo NASA went to the moon with less computing power than are in modern pocket calculators. Unix is a product from that era that remains a throwback to those ancient horse and buggy times that refuses to modernize.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Its not a compiler flag, its a maketool flag." - kaas

Primitive tool nonsense wrapped in command line driven gibberish.

A perfect example of why Linux is such a spectacular command line driven failure.
Vendicar_Decarian
2.3 / 5 (3) Oct 16, 2011
In any case, the issue is the use of AMD CPU's in supercomputer designs.

A Linux fanboy thinks that Intel CPU's would be better.

The engineers at Cray Research disagree, and have decided to use AMD CPU's.

I happen to agree with the design decision made by the engineers at Cray.

kaasinees
1 / 5 (1) Oct 16, 2011
Absolutely right. But as we all know Linux = Unix.

=/= means "not equal". Linux was derived from Unix, but it is nothing like it.

Like run on a wrist watch or a toaster controller.

It runs on all routers, that includes the routers that make up the internet.

No one cares.

everyone cares, obviously as do you.

But the fact is, people abandon Linux as fast as it's installed.

I know many people who enjoy ubuntu.

Whatever. Linux is yesterdays OS for people content to live in the past.

What do you use? An os designed for dumb monkeys?

Primitive tool nonsense wrapped in command line driven gibberish.

That maketool allows programs to be made for any operating system/architecture.

I happen to agree with the design decision made by the engineers at Cray.

Cray engineers are obviously not credible; they never mentioned surface area and power consumption. They still claim to be the fastest, which is obvious since it's one of the newest and biggest. No comparative data.
Vendicar_Decarian
2 / 5 (4) Oct 16, 2011
"=/= means not" - kaas

How about using "not" to mean not.

This is one of the many, many reasons why I drank a toast to Ritchie's death.

The world is a better place without him.

"It runs on all routers" - kaas

and washing machines and clock radio's. I am =/= impressed.

"What do you use? An os designed for dumb monkeys?" - Kaas

Because there are better things to use my brain for than memorizing command line gibberish.

Why do you use a 1900x1024 graphic screen to render 1950's teletype output?

All that is needed is a single character screen and some arrow keys right?

"That maketool allows programs to be made for any operating system/architecture." - Kaas

Wow, even ones that the compiler and linker don't support.

That is one impressive make tool.

"Cray engineers are obviously not credible" - Kaas

But Lintard Fanbouys are.

Excuse me while I laugh.

kaasinees
1 / 5 (1) Oct 16, 2011
How about using "not" to mean not.

It is : http://en.wikiped...equation

This is one of the many, many reasons why I drank at toast to Ritchie's death.


You are glad someone lost his life?

Why do you use a 1900x1024 graphic screen to render 1950's teletype output?


Are you that dumb? Don't you see the Chromium browser in the background? I program on this machine, I do documents, graphics editing, play games, etc.

Wow, even ones that the compiler and linker don't support.

the maketool runs natively and calls whatever compiler/linker.
TehDog
1 / 5 (1) Oct 16, 2011
Meh, another AMD v Intel / unix v Windows argument. And a dig at Dennis Ritchie. VD, I'd suggest you take a read of
http://www.thereg...bituary/

I could point out that 99% of the internet runs on unix in everything from routers, switches, DNS servers etc, but I won't.
It's the right tool for some jobs, MS windows is the right tool for others, let it lie.
Vendicar_Decarian
1 / 5 (3) Oct 16, 2011
"VD, I'd suggest you take a read of .." - TheDog

Ya, Ritchie, the man who brought the world one of the worst programming languages possible, the worst operating system imaginable, and whose sheer incompetence flooded the world with buffer overflows and uninitialized buffers by design - a legacy that lives on to this day in the form of 90% of the code exploits on and off the web.

For those intellectual crimes alone, Ritchie should have had his ignorant throat slit decades ago.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"I could point out that 99% of the internet runs on unix." - TheDog

Whoop-de-doo. HTTP, NNTP, UDP, TCP and all of the other protocols can be and have been implemented on a bloody 6502 - without any OS at all.

Unix is used for one reason and one reason only.

It is free.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"You are glad someone lost his life?" - Kaas

For his sheer incompetence and damage he has done, I drank a glass of wine in celebration of his death. Yes, I am very pleased with his death.

"Are you that dumb? dont you see chromium browser on the background?" - Kaas

I do and laugh at the idea that you are using it to display the output from an emulator of a 1950's teletype.

It is gratifying to see that state of the art Unix/Linux has just advanced past the use of punch cards that vanished in the late 1970's.

80 x 25 text based terminals man.... Why change Unix perfection?

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"the maketool runs natively and calls whatever compiler/linker." - Kaas

In other words it does nothing of substance.

But why change Unix perfection ay?

It's good to keep these inferior text based traditions -x -f:ab //-!@lambda c x ?home~1 excrement=Level:2

Right?

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"It is : http://en.wikiped...equation" - kaas

Which doesn't list "=/="

How about using the word "not" rather than stupidity?

"<" and "!=" are also perfectly acceptable forms for not equal.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
http://www.physor...ies.html

A good week. First the Fascist Jobs and now the Incompetent Ritchie.

A very good week indeed.
PinkElephant
1 / 5 (1) Oct 16, 2011
What a stupid discussion.

VD, you object to command-line tools? You think GUI-based tools are somehow different? Peek underneath the pretty exterior of Visual Studio, and what do you find? Project settings files overflowing with command switches for compilers and linkers. MSBuild is a command-line, text-driven engine. You want a GUI for coding on Linux? Never heard of Eclipse, I take it?

What's all this trash talk about Unix/Linux being unusable, when Apple succeeded merely by slapping a pretty UI on top of BSD Unix, and blowing Microsoft out of the water? You want to talk about security, buffer overflows, etc? Well, I suppose you find Microsoft's offerings more secure than Unix/Linux? WELL, DO YOU???

There are languages out there that take initialization and security seriously. The managed languages, for instance, such as Java or C#. Guess what, none of this would exist if it weren't boot-strapped by *nix, C/C++, and text-based development tools.
PinkElephant
1 / 5 (1) Oct 16, 2011
The Gospel of Cray Engineers also needs a dose of reality. Do you realize, VD, how many times in the past those mythical geniuses at Cray nearly succeeded at running their company into the ground? It is a bloody miracle that Cray is still in existence today, actually.

Yes, I don't doubt AMD gave them a big discount on the processors. I also have no doubt Intel would've given an equally big discount. The publicity alone is worth selling the processors for such an application at cost.

On the basis of performance, scalability, versatility, power efficiency -- you name it -- I just don't understand that particular choice by Cray. The only possibility that makes any sense to me, is that some Cray project manager got a nice kickback from AMD under the table. IOW, good old-fashioned corruption.

And before you accuse me of being an Intel fanboy, I used to like AMD a lot, especially back in the Athlon days. But in recent years, they have really dropped the ball a few times too many.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"What's all this trash talk about Unix/Linux being unusable, when Apple succeeded merely by slapping a pretty UI on top of BSD Unix" - PinkElephant

Sorry charlie, but the Apple OS is based on Mach.

"VD, you object to command-line tools?" - PinkElephant

It depends upon the complexity of the command. If you have to pass more than 4 or 5 parameters and switches then it is time to use an alternative method.

In general command lines should just be avoided or else you end up with garbage like the LinTard OS.

"MSBuild is a command-line, text-driven engine." - Pink Elephant

So? As I said earlier, I often think that Microsoft is trying its best to recreate every failure in Unix.

"There are languages out there that take initialization and security seriously. The managed languages, for instance' - Pink Elephant

Management is a half baked half working solution to problems that wouldn't exist if the originating language (C) wasn't designed by inferiors without any thought of security.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"Do you realize, VD, how many times in the past those mythical geniuses at Cray nearly succeeded at running their company into the ground?" - Pink Elephant

There is a limited global market for supercomputers. Basing your company on the production and sale of supercomputers means you are making a big bet.

"On the basis of performance, scalability, versatility, power efficiency -- you name it -- I just don't understand that particular choice by Cray." - Pink Elephant

It is because on a cost per cycle basis, AMD is going to give them more bang for the dollar.

It has already been explained to you. Your failure to listen is your problem, not mine.

"But in recent years, they have really dropped the ball a few times too many." - Pink Elephant

Absorbing ATI cost them at least a couple of years of innovation I think.

PinkElephant
1 / 5 (1) Oct 16, 2011
Sorry charlie, but the Apple OS is based on Mach.
The kernel may be based on Mach (which is also a Unix-like kernel, BTW), but the rest was largely BSD. http://en.wikiped...Mac_OS_X
It depends upon the complexity of the command.
That's why people create GUIs that hide the complexity of the command.
I often think that Microsoft is trying it's best to recreate every failure in Unix.
Or more likely, they're trying to develop their software in a modular and flexible manner. But you don't sound like you have much experience with software architecture...
designed by inferiors without any thought of security.
Security is a luxury. Back in the day, your apps had to run on laughable hardware, fit within laughable memory constraints, and yet still feature real-time-capable, fast-response functionality. We can afford to talk about security today, because today's laptop PCs are faster than the supercomputers of 20 years ago. So, STFU.
PinkElephant
1 / 5 (1) Oct 16, 2011
There is a limited global market for supercomputers.
And yet, few companies in that sphere floundered as badly or as frequently as Cray.
Your failure to listen is your problem, not mine.
The feeling is mutual. You've been shown that Bulldozer is inferior on performance both in lightly-threaded workloads, and in highly-threaded workloads that are not pure-integer tasks. There is a VERY narrow, and in practice UNREALISTIC, window where Bulldozer does only very slightly better.

You've been shown that Bulldozer is woefully inferior on power dissipation, with all the attendant costs not just in terms of direct electricity consumption but also thermal design constraints, cooling and venting overhead, and maintenance thereof.

The prices of the processors are not sufficiently different to justify those trade-offs.

But whatever. I don't feel like debating this with you any further is going to contribute usefully to anything, so I'll stop here.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"The kernel may be based on Mach (which is also a Unix-like kernel, BTW), but the rest was largely BSD." - Pink Elephant

Sorry Charlie. But Mach was designed from the ground up as an object oriented OS. They just couldn't graft BSD components onto it because the underlying data structures and implementation methods are different.

"That's why people create GUIs that hide the complexity of the command." - Pink Elephant

Fools hide. Rational people replace.

That is the principal problem of the Lintard OS. Too many fools writing too many layers to hide too many design and implementation failures.

"Or more likely, they're trying to develop their software in a modular and flexible manner." - Pink Elephant

I wouldn't advise anyone to emulate the perpetual failure that is Unix.

Only a fool would do that.

"Security is a luxury.' - Pink Elephant

That must have been Ritchie's idea when he decided to code his standard library so that it would be guaranteed to crash any application if..
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
crash any application if the user entered a string input that was too long, or entered malformed strings of numbers for conversion into the internal supported types.

And that is one reason why his death was long overdue.

"Back in the day, your apps had to run on laughable hardware, fit within laughable memory constraints, and yet still feature real-time-capable, fast-response functionality." - Pink Elephant

So of course you write them in a crap language like C so that they run 8 times slower and take up 4 times as much space as necessary.

Makes sense to you apparently.

Makes zero sense to me.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"There is a VERY narrow, and in practice UNREALISTIC, window where Bulldozer does only very slightly better." - Pink Elephant

And as I have shown that performance difference is meaningless as it only represents a tiny fraction of the computational burden of the machine, and have shown that on a cost per computational cycle, AMD provides the better computational value.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"The prices of the processors are not sufficiently different to justify those trade-offs." - pink elephant

Titan will consist of something like 60,000 CPU's and 720,000 processor cores. 16 cores per CPU.

What is the maximum core count for Intel's best offering?

Gulftown has 6 cores

TehDog
1 / 5 (1) Oct 16, 2011
"I could point out that 99% of the internet runs on unix." - TheDog

Please don't misquote my name

Whoop-de-doo. HTTP, NNTP, UDP, TCP and all of the other protocols can be and have been implemented on a bloody 6502 - without any OS at all.

I've just done a quick search for any reference to this claim and find none, a link would be appreciated.

Unix is used for one reason and one reason only.
It is free.

No, it is not.
TehDog
1 / 5 (1) Oct 16, 2011
"VD, I'd suggest you take a read of .." - TheDog

Ya, Ritchie the man who brought the world one of the worst programming languages possible, the worst Operating system imaginable, and who's sheer incompetence flooded the world with buffer overflows and uninitialized buffers by design - a legacy that lives on to this day in the form of 90% of the code exploits on and off the web.

For those intellectual crimes alone, Ritchie should have had his ignorant throat slit decades ago.


He gave programmers a tool, some were surgeons, some were butchers.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
http://ip65.sourceforge.net/

IP65 is a TCP/IP stack for 6502 based computers.

Search time 0.16 seconds

GeckOS is an experimental operating system for MOS 6502 and compatible processors. It offers some Unix-like functionality including preemptive multitasking, multithreading, semaphores, signals, binary relocation, TCP/IP networking via SLIP and a 6502 standard library.

TCP/IP stack in 6502 assembler

http://www.ataria...sembler/

etc... etc.. etc...

I've even seen a web server including the TCP/IP stack implemented on a basic stamp microcontroller.

http://ca.digikey...p-tcp-ip

The original implementation was a bare chip with 5 wires soldered directly to it's pins.

Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"He gave programmers a tool, some were surgeons, some were butchers." - TheDog

He gave programmers a series of tools that would fail BY DESIGN. If you used his tools then your applications were guaranteed to fail.

And for that he will always be known as a loathsome intellectual inferior.
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
"No, it is not." - TheDog

I've downloaded dozens of copies, burned them on disk, reviewed the inferior performance of the OS, and then subjected the disks to all manner of ritual destruction.

Cost to me $0.00

You aren't paying for that worthless OS like some kind of sucker, are you?
Vendicar_Decarian
1 / 5 (2) Oct 16, 2011
Correction. 300,000 cores. They must be reducing the number of CPU's by half.
kaasinees
1 / 5 (1) Oct 17, 2011
VD stop being an idiot, you aren't intelligent.

Unix is not free, it costs money. Linux is free but it is not Unix, it is derived from Unix, but it is very different.
In fact all operating systems are practically derived from Unix, but they have changed so much that you can't call them Unix anymore, even Windows.

You are just trolling this thread, and i wonder what OS you use. Ignorant hypocritical bastard.

http://www.zdnet....soft/459
SteveL
not rated yet Oct 17, 2011
Concerning Ritchie: I don't think anyone wants to put out bad code on purpose. Many times coders throw something together just to get it working, intending to go back and re-write the code. But at some point the marketing and numbers folks force coders to release their work before it's properly debugged and ready.

I remember back in the early 90's when Simutronics (an on-line gaming company; I was doing part-time development for their GemStone3/4 product) made 3D cells available to its coders. I had never worked with cells before and neither had my co-workers, but I had the cell data entry, retrieval and sorting routines in workable shape in a few weeks. It took me over a month to tune the code to where it worked quickly.

Often when we are under the gun for production we kludge something together - mainly to test other code. Marketing folks don't understand the difference between a kludge and working product until after it is released and customers start screaming.
Vendicar_Decarian
1 / 5 (2) Oct 17, 2011
"Unix is not free, it costs money. Linux is free but it is not Unix, it is derived from Unix, but it is very different." - Kaas

It's all the same tired garbage.

Linux is nothing but a duplication of Unix. All the same tools, utilities, functions, and the same worthless ideology that underlies the thing.

The kernel is different, but the rest of the filth is identical, even to the point where source code was copied directly from SCO Unix and elsewhere and then restructured in order to dishonestly hide its origins.

Vendicar_Decarian
1 / 5 (2) Oct 17, 2011
"In fact all operating systems are practicly derived from Unix, ..., even Windows." - Kaas

Is it even possible to say something more stupid?

"Ignorant hypocritical bastard." - Kaas

The benchmarks in your link show that Windows is faster than Unix/Linux, although the title - like all Lintard propaganda - claimed the opposite.

Was that your intention?

Vendicar_Decarian
1 / 5 (2) Oct 17, 2011
"Concerning Ritcie: I don't think anyone wants to put out bad code on purpose." - SteveL

I have no problem with that. So how do you explain that STDIO to this day still contains the same errors? Was Ritchie just in the toilet for the last 50 years taking a massive dump? So he didn't have enough time to go back and fix his massive number of blunders?

Perhaps he was too busy being patted on the back for his greatness to actually correct his massive number of errors. Errors that even a high school student would be failed for.

What kind of low grade moron is Ritchie for creating and then not correcting a whole series of input and output and conversion functions that fill and empty buffers without any facility in those functions to prevent those buffers from overflowing or underflowing?

There is no bounds checking. None-Zilch-Zip. And as a result, anyone who uses Ritchie's standard IO library or standard conversion libraries has a program that can be taken down...
Vendicar_Decarian
1 / 5 (2) Oct 17, 2011
by simply typing too long at the keyboard, or passing it a string in a text file that is too long for the buffer in use.

To this day, this MASSIVE FAILURE is being perpetuated.

How long has it been? 50 years, and these worthless proponents of C and C++ still haven't corrected their massive litany of errors?

Simple death is too good for these worthless peons.
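For readers unfamiliar with the specific flaw being described: the classic example is gets(), which copies input into a caller's buffer with no length limit, versus fgets(), which is told the buffer size. A minimal, hypothetical C sketch of the difference (illustrative only, not anyone's production code):

/* Hypothetical sketch of unbounded vs. bounded input in C. */
#include <stdio.h>

int main(void) {
    char buf[16];

    /* Unsafe pattern: gets() does not know how big buf is, so input longer
     * than 15 characters overflows the buffer (undefined behaviour), which
     * is why gets() is widely considered unusable.
     *   gets(buf);
     */

    /* Bounded pattern: fgets() is told the buffer size and will not
     * write past the end of buf. */
    if (fgets(buf, sizeof buf, stdin) != NULL) {
        printf("read: %s", buf);
    }
    return 0;
}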

SteveL
not rated yet Oct 17, 2011
It's easy to throw stones at the leader, especially when we don't have to live or work within their limitations. All instructions take time, memory and cycles. All of which were in short supply in the dawn of the computer age.

While I disagree with your assessment about the original product, I do agree with your assessment of later iterations. There is no excuse for not fixing their product, even if they had to step back and start from scratch. Still, I would wish death upon no one, incompetent or not.