Supercomputing the difference between matter and antimatter

Mar 29, 2012
This diagram illustrates the wide range of distance scales that must be understood before the kaon-decay calculation can be performed. The lowest layer is a picture showing the tracks of the decay particles as they move through the liquid hydrogen of a “bubble chamber” — a kind of particle detector used in the 1950s and ’60s. The next layer is a diagrammatic interpretation of what’s happening in the bubble-chamber picture — how the kaon (K) is produced and “breaks apart” to form two other particles: the positive pion (π+) and the negative pion (π−). This process happens on the familiar scale of a fraction of a meter. The next scale of a few femtometers is shown on the third layer, where the lattice of points and paths represents the supercomputer calculation, which takes into account the binding of quarks and antiquarks as they form the particles being studied. Finally, the top layer shows what is known as a Feynman diagram of the shortest scale — 1/1000 of a femtometer — the scale at which a quark undergoes a sort of metamorphosis from one flavor into another.

(PhysOrg.com) -- An international collaboration of scientists has reported a landmark calculation of the decay process of a kaon into two pions, using breakthrough techniques on some of the world's fastest supercomputers. This is the same subatomic particle decay explored in a 1964 Nobel Prize-winning experiment performed at the U.S. Department of Energy's Brookhaven National Laboratory (BNL), which revealed the first experimental evidence of charge-parity (CP) violation — a lack of symmetry between particles and their corresponding antiparticles that may hold the answer to the question "Why are we made of matter and not antimatter?"

The new research — reported online in Physical Review Letters March 30, 2012 — helps nail down the exact process of kaon decay, and is also inspiring the development of a new generation of supercomputers that will allow the next step in this research.

"The present calculation is a major step forward in a new kind of stringent checking of the Standard Model of particle physics — the theory that describes the fundamental particles of matter and their interactions — and how it relates to the problem of matter/antimatter asymmetry, one of the most profound questions in science today," said Taku Izubuchi of the RIKEN BNL Research Center and BNL, one of the members of the research team publishing the new findings. "When the universe began, did it start with more particles than antiparticles, or did it begin in a symmetrical way, with equal numbers of particles and antiparticles that, through CP violation or a similar mechanism, ended up with more matter than antimatter?"

Either way, the universe today is composed almost exclusively of matter, with virtually no antimatter to be found.

Scientists seeking to understand this asymmetry frequently look for subtle violations in predictions of processes described by the Standard Model. One property of these processes, CP symmetry, can be explored by comparing two particle decays — the decay of a particle observed directly and the decay of its anti-particle, viewed in mirror reflection. "C" refers to the exchange of a particle and its antiparticle (which is exactly the same but with opposite charge). "P" specifies the mirror reflection of this decay. But as the Nobel Prize-winning experiments showed, the two decays are not always symmetrical: In some cases you end up with extra particles (matter) and CP symmetry is "violated."
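As a concrete illustration, here is the standard textbook picture of why a long-lived neutral kaon decaying to two pions signals CP violation. This is a minimal sketch using one common phase convention; the notation is not taken from the new paper.

```latex
% Hedged textbook sketch (common phase convention; not notation from the paper).
% Under CP, a neutral kaon and its antiparticle swap into each other:
%   CP |K0> = |K0bar>,  CP |K0bar> = |K0>.
% The CP eigenstates are therefore the combinations
\begin{align}
  \lvert K_1\rangle &= \tfrac{1}{\sqrt{2}}\bigl(\lvert K^0\rangle + \lvert \bar K^0\rangle\bigr),
      \qquad CP = +1, \\
  \lvert K_2\rangle &= \tfrac{1}{\sqrt{2}}\bigl(\lvert K^0\rangle - \lvert \bar K^0\rangle\bigr),
      \qquad CP = -1.
\end{align}
% A two-pion final state has CP = +1, so if CP symmetry were exact the
% long-lived neutral kaon would be pure K_2 and could never decay into two
% pions. The K -> pi pi decays observed in the 1964 experiment show that it
% does, i.e. CP symmetry is violated.
```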

Exploring the precise details of the kaon decay process could help elucidate how and why this happens.

Supercomputing the decay process

The new calculation of one aspect of this decay, which required creating unique new computer techniques to use on some of the world's fastest supercomputers, was carried out by physicists from Brookhaven National Laboratory, Columbia University, the University of Connecticut, the University of Edinburgh, the Max-Planck-Institut für Physik, the RIKEN BNL Research Center (RBRC), the University of Southampton, and Washington University. The calculation builds upon extensive theoretical studies done since the first 1964 experiment and much more recent experiments done at CERN, the European particle physics laboratory, and at Fermi National Accelerator Laboratory.

The unprecedented accuracy of the measured experimental values — which incorporate distances as minute as one thousandth of a femtometer (one femtometer is 1/1,000,000,000,000,000th of a meter, the size of the nucleus of a hydrogen atom) — allowed the collaboration to follow the process in extreme detail: the decay of individual quarks (the subatomic components of many Standard Model particles) and the flitting in and out of existence of other subatomic particles. Viewing the picture from farther away — a few tenths of a femtometer — this basic process is obscured by a sea of quark-antiquark pairs and a cloud of the gluons that hold them together. At this distance, the gluons begin to bind the quarks into the observed particles. The last part of the problem is to show the behavior of the quarks as they orbit each other, moving at nearly the speed of light through a swarm formed from gluons and further pairs of quarks and antiquarks, and at last forming the pions of the decay under study.

To "translate" the mathematics needed to describe these interactions into a computational problem required the creation of powerful numerical methods and advances in technology that made possible the present generation of massively parallel supercomputers with peak computational speeds of hundreds of teraflops. (A teraflop computer can perform one million million operations per second).

The actual kaon decay described by the calculation spans distance scales of nearly 18 orders of magnitude, from the shortest distances of one thousandth of a femtometer — far below the size of an atom, within which one type of quark decays into another — to the everyday scale of meters over which the decay is observed in the lab. This range is similar to a comparison of the size of a single bacterium and the size of our entire solar system.
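A quick back-of-envelope check of that comparison, as a minimal sketch: the bacterium and solar-system sizes below are rough illustrative values, not figures from the paper.

```python
import math

# Compare the span of scales in the kaon decay with the article's analogy.
quark_decay_scale = 1e-18   # ~1/1000 of a femtometer, in meters
lab_scale = 1.0             # meters, where the decay is observed

bacterium = 1e-6            # ~1 micrometer (assumed illustrative size)
solar_system = 9e12         # ~diameter of Neptune's orbit, in meters (assumed)

span_decay = math.log10(lab_scale / quark_decay_scale)
span_analogy = math.log10(solar_system / bacterium)

print(f"kaon decay spans ~{span_decay:.0f} orders of magnitude")        # ~18
print(f"bacterium-to-solar-system spans ~{span_analogy:.0f} orders")    # ~19
```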

The collaboration carried out the computation using the methods of lattice quantum chromodynamics (QCD) — the theory that describes fundamental quark-gluon interactions — in which the decay is "imagined" as taking place within a lattice, or grid, of space-time points that can be entered into a computer. The quantum fluctuations of the decay are then calculated by a statistical "Monte Carlo" method, which samples the most likely fluctuations. The calculation required 54 million processor hours on the IBM BlueGene/P supercomputer installed at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. Earlier calculations were also done on Brookhaven's QCDOC (for QCD on a chip) supercomputer, a prototype for IBM's BlueGene series.
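To give a feel for what such a calculation looks like, here is a deliberately tiny toy: a quantum harmonic oscillator on a one-dimensional Euclidean time lattice, sampled with the Metropolis algorithm. Real lattice QCD applies the same statistical idea to a four-dimensional lattice of quark and gluon fields; every parameter below is illustrative and none of it comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100            # lattice sites (periodic in Euclidean time)
a = 0.5            # lattice spacing (illustrative value)
n_sweeps = 5000    # Monte Carlo sweeps over the lattice
n_therm = 500      # sweeps discarded for thermalization
delta = 1.0        # proposal step size

x = np.zeros(N)    # the "field": one real number per lattice site

def local_action(x, i):
    """Pieces of the Euclidean action that involve site i (m = omega = 1)."""
    xm, xp = x[(i - 1) % N], x[(i + 1) % N]
    kinetic = ((x[i] - xm) ** 2 + (xp - x[i]) ** 2) / (2.0 * a)
    potential = 0.5 * a * x[i] ** 2
    return kinetic + potential

samples = []
for sweep in range(n_sweeps):
    for i in range(N):
        old, s_old = x[i], local_action(x, i)
        x[i] = old + rng.uniform(-delta, delta)   # propose a local change
        dS = local_action(x, i) - s_old
        # Metropolis step: accept with probability min(1, exp(-dS))
        if dS > 0.0 and rng.random() >= np.exp(-dS):
            x[i] = old                            # reject the proposal
    if sweep >= n_therm:
        samples.append(np.mean(x ** 2))

# <x^2> is 0.5 in the continuum limit; the lattice value carries small
# discretization corrections at this spacing.
print(f"Monte Carlo estimate of <x^2>: {np.mean(samples):.3f}")
```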

This calculation, when compared with predictions from the Standard Model, allows the scientists to determine another remaining unknown quantity important to understanding kaon decay and its relation to CP violation. A direct calculation of this remaining unknown quantity and a higher precision recalculation of the present result will be the focus of future research, requiring even more computing power.

"Fortunately," says co-author Peter Boyle of the University of Edinburgh, "the next generation of IBM supercomputers is being installed over the next few months in many research centers around the world, including the ALCF, the University of Edinburgh, the KEK laboratory in Japan, Brookhaven Lab, and the RBRC."

These new IBM BlueGene/Q machines are expected to have 10 to 20 times the performance of the current machines, Boyle explained. "With this dramatic boost in computing power we can get a more accurate version of the present calculation, and other important details will come within reach," he said. "This is a nice synergy between science and the computer — the science pushing computer developments and the advanced computers pushing science forward, to the benefit of the science community and also the commercial world."

The calculations were performed under the U.S. Department of Energy’s (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program on the Intrepid BlueGene/P supercomputer at the ALCF at Argonne National Laboratory and on the Ds Cluster at Fermi National Accelerator Laboratory, computer resources of the USQCD Collaboration. Part of the analysis was performed on the Iridis Cluster at the University of Southampton and on the DiRAC facility in the UK. The research was supported by DOE’s Office of Science, the U.K.’s Science and Technology Facilities Council, the University of Southampton, and the RIKEN Laboratory in Japan.


User comments (9)


Yevgen
5 / 5 (2) Mar 29, 2012
What is the conclusion of the research? There is a lot of talk of how it is done, but no actual conclusion like "the Standard Model is found to exactly match the experiment" or "the CP violation constant is X.XXXXX..." What was their result - that we did the calculation?
Sean_W
1 / 5 (1) Mar 29, 2012
Quoting Yevgen: "What is the conclusion of the research? [...] What was their result - that we did the calculation?"


If I read the article correctly - and that's a big if - they found a charge-parity violation that wasn't specifically designed into the simulation and that might help describe the real-world violation(s) that resulted in more matter than antimatter.

And from the sound of it, the violation appeared in a simulation of a static volume, which might be bad news for some ideas I have heard of that ascribe such asymmetry to the effect of rotation in the universe (and later in galaxies).
Callippo
1.2 / 5 (6) Mar 29, 2012
Quoting Yevgen: "What was their result - that we did the calculation?"
Yep, and we spent some money on it, and we'll need more. The physicists apparently still haven't realized that the public needs to know what its money is being spent on. A link to the fifty-year-old experiment is provided, but no reference to the actual work is given. What is such reporting good for, then? Propaganda for mainstream physics?
Urgelt
not rated yet Mar 29, 2012
Well and good, I love simulations. But my curiosity propels a question this simulation doesn't address: what are the gravitational properties of antimatter particles?

If we assume that antimatter has attractive gravity, then failure to see the signatures of antimatter in luminous objects means there can't be much antimatter out there. But if antimatter has repulsive gravity, then it's not going to clump into luminous matter, and all bets are off. Could be quite a lot of it out there influencing the macro features of the universe.

Inquiring minds wish to know.
gromm1t2
not rated yet Mar 31, 2012
"for what such reporting is good for , after then??"
I hate entry into "flaming " but, if one doesn't grasp what an enormous scientific achievement this short report summarizes, perhaps one should reread and study, line by line, detail by comment, looking up each reference and word regarding which confusion might occur studying for at least a week or so before comment on a work that many brilliant people have spent years on. Just a thought.
Iourii Gribov
1 / 5 (2) Mar 31, 2012
Dear Taku Izubuchi, you say the problem of matter/antimatter asymmetry is one of the most profound questions in science today. I argue for its exact symmetry in the Gribov Periodical Multiverse (GPM), a periodical (matter/antimatter = gravity/antigravity) Hyperbook uniting basic physical laws (SR & QM & SUSY & superfluid weightless vacuum, etc.). The GPM explains basic miracles in modern physics (e-Journal published, Humboldt Univ. http://www2.hu-be...ibov.pdf ): (1) the interconnected nature of Dark Energy (DE) and Dark Matter (DM), the flatness of our Universe/Multiverse, the accelerating expansion, and the bubble large-scale structure, with the estimated theoretical ratio DE/(DM + Ordinary Matter) ~74%/26% - very near the recent measurements (the DE & DM data work as evidence for the GPM); (2) no Higgs, no el. sparticles; (3) antigravity prediction for antimatter at CERN and Mills lab!
Cynical1
1.3 / 5 (3) Apr 01, 2012
Wouldn't anti-matter have anti-visibility?
denijane
not rated yet Apr 03, 2012
Erm, I also couldn't find the results of the study in this article. It poses the problem and describes the methods, the computers, and so on, but the results, if any, are extremely hard to find. It reads more like an article dedicated to the computers than to the actual physics. Which is a shame, since the science is really fundamental.
Cynical1
1 / 5 (1) Apr 04, 2012
Denijane,
I guess that would bring us to the conclusion that this is an anti-article. We can see the theory evidence - but not the results...