# Supercomputers crack sixty-trillionth binary digit of Pi-squared

Australian researchers have done the impossible -- they’ve found the sixty-trillionth binary digit of Pi-squared! The calculation would have taken a single central processing unit (CPU) about 1,500 years, but scientists from IBM and the University of Newcastle completed the work in just a few months on IBM's "BlueGene/P" supercomputer, which is designed to run continuously at one quadrillion calculations per second.

Their work was based on a mathematical formula discovered a decade ago in part by the Department of Energy's David H. Bailey, the Chief Technologist of the Computational Research Department at the Lawrence Berkeley National Laboratory. The Australian team took Bailey’s program, which ran on a single PC processor, and made it run faster and in parallel on thousands of independent processors.

"What is interesting in these computations is that until just a few years ago, it was widely believed that such mathematical objects were forever beyond the reach of human reasoning or machine computation," Bailey said.

"Once again we see the utter futility in placing limits on human ingenuity and technology."

A binary digit, or "bit," is the "DNA" of all computing. In a computer, everything is represented as strings of zeroes and ones. The decimal number 12, for instance, is represented as "1100," and the fraction 9/16 is represented as "0.1001." So, as one might imagine, calculating the sixty-trillionth binary digit of a number is quite a feat.
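Both conversions from the paragraph are easy to reproduce in a few lines of Python; `frac_to_binary` here is our own illustrative helper, not anything from the researchers' code:

```python
def frac_to_binary(num, den, max_bits=16):
    """Expand the fraction num/den (0 <= num < den) as a binary string."""
    bits = []
    while num and len(bits) < max_bits:
        num *= 2                       # shift one binary place to the left
        bits.append('1' if num >= den else '0')
        if num >= den:
            num -= den                 # keep only the fractional part
    return '0.' + (''.join(bits) or '0')

print(format(12, 'b'))        # -> 1100
print(frac_to_binary(9, 16))  # -> 0.1001
```

The loop is just long division in base 2: doubling the numerator shifts the binary point one place, and each comparison against the denominator reads off the next bit.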

According to Professor Jonathan Borwein of the University of Newcastle, this work represents the largest single computation done for any mathematical object to date. The idea for the project was sparked when IBM Australia was looking for something to do related to "Pi Day" (March 14) on a new IBM BlueGene/P computer system. Borwein proposed running Bailey’s formula for Pi-squared, as the calculation had already been done for Pi itself. The team also calculated Catalan’s constant, another important number that arises in mathematics.
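The article does not reproduce Bailey's formula for Pi-squared, but the "digit extraction" idea it builds on is well illustrated by the classic Bailey–Borwein–Plouffe (BBP) formula for Pi itself, which yields any hexadecimal digit (four binary digits) without computing the ones before it. A minimal sketch, not the team's actual program:

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of Pi after the point (n >= 1), via the
    Bailey-Borwein-Plouffe formula. Float round-off limits this sketch
    to modest n; serious implementations track error bounds carefully."""
    def series(j):
        # fractional part of sum over k of 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):                  # head: modular exponentiation
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(n, n + 20):          # tail: terms already below 1
            s += 16.0 ** (n - 1 - k) / (8 * k + j)
        return s % 1.0

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

print([pi_hex_digit(i) for i in range(1, 7)])  # Pi = 3.243F6A... in hex
```

The modular exponentiation in the head of the series is what makes the trick work: it keeps only the fractional part of enormous powers of 16, so the digit at position n costs roughly n small operations rather than a full n-digit computation.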

**Why Pi?**

The importance of Pi has long been known -- multiply it by the diameter of any circle to get the circumference. Ancient Egyptians used this number in their design of the pyramids, while ancient scholars in Jerusalem, India, Babylon, Greece and China used the proportion in their studies of architecture and symbols.

Yet despite its longevity, Pi is one of the most mysterious numbers in mathematics. Because it is "irrational," Pi can never be expressed as a finite decimal number, and humanity will never have anything but approximations of it. So why bother computing Pi to the ten-trillionth decimal place? After all, a value of Pi to 40 digits would be more than enough to compute the circumference of the Milky Way galaxy to an error less than the size of a proton.

According to Bailey, one application for computing the digits of Pi is to test the integrity of computer hardware and software, which is a focus of Bailey’s research at Berkeley Lab. “If two separate computations of digits of Pi, say using different algorithms, are in agreement except perhaps for a few trailing digits at the end, then almost certainly both computers performed trillions of operations flawlessly,” he says.
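As a toy illustration of that verification idea (our own sketch, not Bailey's code): compute Pi two independent ways with exact integer arithmetic and check that the resulting digits agree.

```python
def arctan_inv(x, digits):
    """atan(1/x) scaled by 10**(digits+10), via the Taylor series,
    using exact integer arithmetic (10 guard digits)."""
    term = 10 ** (digits + 10) // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(formula, digits):
    """Pi scaled by 10**digits, from a list of (coeff, x) arctan terms."""
    total = sum(c * arctan_inv(x, digits) for c, x in formula)
    return (4 * total) // 10 ** 10        # drop the guard digits

machin = [(4, 5), (-1, 239)]   # Machin: pi/4 = 4*atan(1/5) - atan(1/239)
euler  = [(1, 2), (1, 3)]      # Euler:  pi/4 =   atan(1/2) + atan(1/3)

a, b = pi_digits(machin, 50), pi_digits(euler, 50)
print(str(a)[:10])                        # -> 3141592653
assert str(a)[:50] == str(b)[:50]         # agreement -> both runs trustworthy
```

The two arctangent identities share no intermediate results, so matching output is strong evidence that every arithmetic operation in both runs executed correctly -- the same logic, at vastly larger scale, behind using Pi computations as hardware burn-in tests.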

For example in 1986, a Pi-calculating program that Bailey wrote at NASA, using an algorithm due to Jonathan and Peter Borwein, detected some hardware problems in one of the original Cray-2 supercomputers that had escaped the manufacturer’s tests. Along this same line, some improved techniques for computing what is known as the fast Fourier transform on modern computer systems had their roots in efforts to accelerate computations of Pi. These improved techniques are now very widely employed in scientific and engineering applications. And of course, from a mathematical perspective it’s just plain fascinating to see the digits of Pi in action!


**More information:** You can read more about Pi on Bailey’s blog, and you can find out more about Berkeley Lab Computing Sciences on the lab’s website.

**Citation**: Supercomputers crack sixty-trillionth binary digit of Pi-squared (2011, April 29) retrieved 18 June 2019 from https://phys.org/news/2011-04-supercomputers-sixty-trillionth-binary-digit-pi-squared.html

## User comments

**axemaster**: Errrm... You do realize that they aren't ACTUALLY represented as ones and zeroes, right? I mean, the computer sees a bunch of voltage spikes - there are no numbers flying down the wires. Just thought I should mention that, since your analogy is completely wrong anyway. No computer engineer with a clue would ever say such a thing.

So if these guys used just one computer to evaluate Pi, and there's a potential for error in the calculation, then how do they know the answer is accurate?

**thales**: I bet it makes your day when you find a word misspelled.

**axemaster**: Not exactly. I do, however, take issue with horribly misleading explanations of extremely simple things, especially on a science website.

Is it so wrong to expect a bare minimum of competence?

**patzer**: Not only are you incorrect in saying the quoted line is wrong, but you also use common terminology incorrectly. Voltage spikes?? Do you know anything about CMOS?? You sound clueless.

**snwboardn**: I don't think a breakdown of arithmetic and logic gates was necessary to get the point of this article across...

**Megadeth312**: You can't simply draw a perfect circle on the scale that the Egyptians used... and for that matter, discovering and demonstrating Pi to only a few digits is remarkably easy. Assuming the only race able to build such grand, well-engineered structures could not have discovered Pi is asinine.

**Smellyhat**: One draws a circle by moving a fixed length (a rope, at full extension) around a fixed point (something in the ground that the rope is tied to). The accuracy of a circle drawn by this means *increases* as the circle grows larger. I can only imagine you to be claiming that the Egyptians were using microscopic circles.

The Egyptians may or may not have known that there was a fixed, constant ratio between the circumference of a circle and its radius. It's entirely plausible that they did. They didn't need to, though, to build pyramids. The only debate is whether the evidence of the ratio in the pyramids is deliberate or a natural product of method and geometry itself.

**bluehigh**: Binary-based computations do not have implicit decimal points, and 1001 is decimal 9; even if a place is assumed, then 0.9.

Anyone care to explain how 0.1001 becomes a representation of the fraction 9/16, which as a decimal is 0.5625 and therefore more likely represented as 01010111111001 in binary? The alternative is the implication that because in a 4-bit word you can represent 16 numbers (0 included), then 1001 is 9 out of the 16 - BUT this would be hexadecimal and not binary.

**RobertKarlStonjek** (to bluehigh): Nah ... the math and base conversion does not even come close to being sensible.

**MrV_**: The binary number with its binary point, 1111.1111, can be written as follows using decimal fractions:

1111.1111 = 1*2^3 + 1*2^2 + 1*2^1 + 1*2^0 + 1*2^-1 + 1*2^-2 + 1*2^-3 + 1*2^-4.

= 8 + 4 + 2 + 1 + 1/2 + 1/4 + 1/8 + 1/16.

So 1/16 in binary is actually 0000.0001: the other place values have to be zero, while the fourth position to the right of the binary point, the 1/16's place, has to be 1.

If this were a decimal number, the fourth position to the right of the decimal point would be the 10,000ths place (1/10000 = 1/10^4).

So the binary equivalents of the 16ths are listed below:

1/16 = 0.0001

2/16 = 0.0001 + 0.0001 = 0.0010

3/16 = 0.0010 + 0.0001 = 0.0011

4/16 = 0.0011 + 0.0001 = 0.0100

5/16 = 0.0100 + 0.0001 = 0.0101

6/16 = 0.0101 + 0.0001 = 0.0110

7/16 = 0.0110 + 0.0001 = 0.0111

8/16 = 0.0111 + 0.0001 = 0.1000

9/16 = 0.1000 + 0.0001 = 0.1001
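The table above can be checked mechanically with a short Python sketch; note that the fractional bits of k/16 are simply k written as a 4-bit binary number:

```python
# Verify the k/16 expansions listed above: the fractional bits of k/16
# are k itself, written as a 4-bit binary number.
for k in range(1, 10):
    bits = format(k, '04b')                       # e.g. 9 -> '1001'
    value = sum(int(b) / 2 ** (i + 1) for i, b in enumerate(bits))
    assert value == k / 16                        # exact: dyadic fractions
    print(f"{k}/16 = 0.{bits}")
```

(The float comparison is exact here because sixteenths are dyadic fractions and are represented exactly in binary floating point.)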

**antialias**: Exactly.

Having such a long sequence of bits for Pi might still be interesting. We could search it for statistical anomalies. There probably aren't any - but you never know until you look.

**bluehigh**: It is Binary Coded Hexadecimal that allows you to represent fractions in such a manner, just as computational results from calculations are most useful for human interpretation when represented as Binary Coded Decimal.

Far be it from me, though, to nitpick math definitions when presented with such an outstanding article on the important testing of reliability in supercomputers by the clearly well-funded Uni of Newcastle.

Anyone know if Newcastle hospital got the much-needed children's facilities it has been begging for?

**Megadeth312**: One cannot draw a circle of that scale by hand. Sure, you can draw one on paper that is fairly accurate using the method you described, but taking those points and extrapolating them to the size of the relationships used would introduce noise. Using the method you describe would require drawing circles the size of the pyramids... besides that, they themselves describe the methodology used, and none of it describes using giant lengths of rope to do calculations.

**Smellyhat**: I have no particular interest in whether or not the Egyptians were familiar with the notion of Pi, but please clarify what sort of scale you're talking about, and how you believe the Egyptians did calculations.

**SteveL**: Let's see: research in an attempt to save lives, or solving Pi to the 60-trillionth bit... Guess we know which they think is important.

**Quantum_Conundrum**: Yeah.

Just shows how screwed up people's priorities are in this life.

These idiots should be fired/defunded.

**MorituriMax**: Probably less than the price of the energy you expend drawing breath.

It helps them check their hardware for one thing, to see if any errors have crept in. Which in turn allows those computers to accurately design stuff that will be worth much more than the price for the computer and the power it uses, example: predicting the weather, or the paths of hurricanes and tornadoes to save lives.

Did you even read the article?

**antialias**: Quick intro to supercomputers:

1) To do something that uses the full power of a super computer requires specifically written software.

2) That software doesn't just appear - development takes (man) years. Especially when we're dealing with complex simulations.

3) During that time there are periods where the machine is idle, especially during testing/debugging.

4) 'Simple' programs (like the one described) can easily be run in those idle periods. They get stopped whenever the higher-priority work is being done and resume when that is interrupted (for whatever reason... evaluation, debugging, tweaking, ...).

Think of it like SETI@home. No other 'useful' work had to suffer delays because of this.

**SteveL**: Yes, SETI@home is another example of wasted resources. In my opinion the only useful thing out of that project is the distributed computing techniques developed at Berkeley. SETI@home did introduce me to distributed computing, and I contribute to this day (but not for SETI).

Do you really think that testing to the 60 trillionth bit proves the equipment will provide any more usefully valid results than the 1 trillionth bit? I know the math. I'm talking about actually useful accuracy. After a point you're only doing it for the record - not really to accomplish anything.

I know a bit about coding. I was a coder/developer for 6 years. I enjoyed the challenges. Among other things I developed code for system load balancing and code for dynamic data handling in 3D cells including the subscripts for input, output, sorting, editing, etc. If it paid well enough I'd have likely stuck with it.
