Halos Gone MAD

Apr 08, 2011 By Jon Voisey
Image caption: Distribution of dark matter when the universe was about 3 billion years old, obtained from a numerical simulation of galaxy formation. The left panel displays the continuous distribution of dark matter particles, showing the typical wispy structure of the cosmic web, a network of sheets and filaments; the right panel highlights the dark matter halos, the most efficient cosmic sites for the formation of star-bursting galaxies, with a minimum dark matter halo mass of 300 billion times that of the Sun. Credit: VIRGO Consortium/Alexandre Amblard/ESA

One of the successes of the ΛCDM model of the universe is the ability of models to create structures with scales and distributions similar to those we observe in the universe today. Or, at least, that's what astronomers tell us. While computer simulations can recreate numerical universes in a box, interpreting these mathematical approximations is a challenge in and of itself. To identify the components of the simulated space, astronomers have had to develop tools to search for structure. The result has been nearly 30 independent computer programs since 1974. Each promises to reveal the forming structure of the universe by finding the regions in which dark matter halos form. To put these algorithms to the test, a conference entitled "Haloes going MAD" was arranged in Madrid, Spain, in May 2010, at which 18 of these codes were compared to see how well they stacked up.

Numerical simulations of universes, like the famous Millennium Simulation, begin with nothing more than "particles". While these are undoubtedly small on a cosmological scale, each such particle represents a blob of matter with millions or billions of solar masses. As time runs forward, the particles are allowed to interact with one another following rules that coincide with our best understanding of physics and the nature of such matter. This leads to an evolving universe from which astronomers must use complicated codes to locate the conglomerations of dark matter inside which galaxies would form.
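To make the idea concrete, here is a minimal sketch of a single timestep of such a simulation: direct-summation gravity with a kick-drift update. Everything here (names, particle counts, units) is invented for illustration; production codes like GADGET, which ran the Millennium Simulation, use tree or particle-mesh methods precisely to avoid the O(N²) cost shown below.

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=1.0, softening=0.05):
    """Advance N particles one timestep under mutual gravity.

    Direct O(N^2) summation -- fine for a demo, far too slow for the
    billions of particles in a real cosmological simulation.
    """
    # displacement from every particle i to every particle j: shape (N, N, 3)
    dx = pos[None, :, :] - pos[:, None, :]
    # softened squared distances avoid infinite forces at zero separation
    r2 = (dx ** 2).sum(axis=-1) + softening ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)        # a particle exerts no force on itself
    # a_i = G * sum_j m_j * (x_j - x_i) / |x_j - x_i|^3
    acc = G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)
    vel = vel + acc * dt                 # kick
    pos = pos + vel * dt                 # drift
    return pos, vel

# toy initial conditions: 500 random particles in a unit box
rng = np.random.default_rng(42)
pos = rng.uniform(0.0, 1.0, size=(500, 3))
vel = np.zeros_like(pos)
mass = np.full(500, 1.0 / 500)

for _ in range(100):                     # run the toy universe forward
    pos, vel = gravity_step(pos, vel, mass, dt=0.01)
```

Run long enough, even a toy like this develops clumps, and those clumps are what the halo-finding codes below are built to identify.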

One of the main methods such programs use is to search for small overdensities and then grow a sphere around each one until the mean enclosed density falls below some threshold. Most will then prune the particles within the volume that are not gravitationally bound, to make sure the detection mechanism didn't just seize on a brief, transient clustering that will fall apart in time. Other techniques search in phase space, looking for nearby particles with similar velocities (a sign that they have become bound).
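A hedged sketch of that spherical-overdensity idea, continuing in Python: grow a sphere around a candidate center until the mean enclosed density drops below some multiple of the box's mean density, then discard members moving faster than a rough escape-speed estimate. The threshold delta=200 and the crude one-pass unbinding are common textbook simplifications assumed here, not a description of any specific code from the comparison.

```python
import numpy as np

def spherical_overdensity(pos, mass, center, mean_density, delta=200.0):
    """Return (R_delta, member indices) for the sphere around `center`
    inside which the mean enclosed density stays above delta * mean_density."""
    r = np.linalg.norm(pos - center, axis=1)
    order = np.argsort(r)                      # particles from the inside out
    enclosed_mass = np.cumsum(mass[order])
    volume = (4.0 / 3.0) * np.pi * np.maximum(r[order], 1e-10) ** 3
    dense_enough = enclosed_mass / volume >= delta * mean_density
    if not dense_enough.any():
        return None                            # no halo at this center
    edge = np.where(dense_enough)[0].max()     # outermost radius above threshold
    return r[order][edge], order[: edge + 1]

def prune_unbound(pos, vel, mass, members, center, G=1.0):
    """Crude one-pass unbinding: drop members faster than the escape speed
    of the total member mass treated as a point at the center."""
    r = np.linalg.norm(pos[members] - center, axis=1)
    v_esc = np.sqrt(2.0 * G * mass[members].sum() / np.maximum(r, 1e-10))
    bulk = vel[members].mean(axis=0)           # subtract the halo's bulk motion
    speed = np.linalg.norm(vel[members] - bulk, axis=1)
    return members[speed < v_esc]
```

Real finders iterate the unbinding step, since removing one particle changes the potential felt by the rest; the phase-space finders mentioned above instead cluster particles jointly in position and velocity.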

To compare how the algorithms fared, they were put through two tests. The first involved a series of intentionally constructed dark matter halos with embedded sub-halos. Since the particle distribution was placed by hand, the programs' output could be checked against the known centers and sizes of the halos. The second test was a full-fledged universe simulation. In this case the actual distribution wasn't known in advance, but the sheer size allowed the different programs to be compared on the same data set, to see how similarly they interpreted a common source.
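The first kind of test can be mimicked in a few lines: drop a clump of particles at a known position into a uniform background and check whether the finder recovers the planted center and size. This sketch reuses the `spherical_overdensity` function from above; the numbers are arbitrary, and the real mock halos in the comparison included nested sub-halos and realistic density profiles rather than a simple Gaussian blob.

```python
import numpy as np

rng = np.random.default_rng(7)
box = 100.0
true_center = np.array([50.0, 50.0, 50.0])

# uniform background plus a dense Gaussian clump at the known center
background = rng.uniform(0.0, box, size=(20000, 3))
clump = true_center + rng.normal(scale=1.5, size=(5000, 3))
pos = np.vstack([background, clump])
mass = np.ones(len(pos))
mean_density = mass.sum() / box ** 3

# a finder that works should report a center and radius near the planted values
result = spherical_overdensity(pos, mass, true_center, mean_density)
if result is not None:
    radius, members = result
    recovered_center = pos[members].mean(axis=0)   # center of mass of members
    print(f"R_200 ~ {radius:.2f}, center offset = "
          f"{np.linalg.norm(recovered_center - true_center):.3f}")
```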

In both tests, all the finders generally performed well. In the first test there were some discrepancies based on how different programs defined the location of a halo: some defined it as the peak in density, while others used the center of mass. When searching for sub-halos, finders that used the phase-space approach seemed able to detect smaller formations more reliably, yet did not always determine which particles in the clump were actually bound. For the full simulation, all algorithms agreed exceptionally well. Due to the nature of the simulation, small scales weren't well represented, so the understanding of how each code detects these structures was limited.

The combination of these tests did not favor one particular algorithm or method over any other; it revealed that each generally functions well with respect to the others. That so many independent codes, using independent methods, reach the same answers means the findings are extremely robust. The knowledge they pass on about how structure in the universe evolves allows astronomers to make fundamental comparisons with the observable universe in order to test such models and theories.

The results of this test have been compiled into a paper that is slated for publication in an upcoming issue of the Monthly Notices of the Royal Astronomical Society.


User comments (7)


Mahal_Kita
3 / 5 (4) Apr 08, 2011
Ah.. "Raisonner pour le besoin de la cause" (reasoning to suit the cause), my French girlfriend used to say.. When you construct a model, you can't eliminate what you expect to find. When you eliminate that, the model is of no use. And when you do that, the model projects exactly what you expect it will. Please, please, please.. Learn from my girlfriend of so many years ago.. As I did.
hush1
not rated yet Apr 08, 2011
O.k. One question is begging. What do you do now? Or what did you do then?
If you assume nothing, then scientific inquiry or methodology is excluded.
Mahal_Kita
3 / 5 (2) Apr 09, 2011
Hey hush.. Clearly you did not understand my point. What happened here is a 'double tap' because the mathematics involved is the mathematics that produced the concept of Dark Matter. Yet no other mathematical reasoning has produced Dark Matter, only the lack of sufficient matter to explain certain phenomena.
hush1
not rated yet Apr 09, 2011
Thirty computer programs and counting. I can understand the desire for an explanation to all the observational data.

I don't even know if the 'tools' (mathematics) to get the 'job' done (explain data without overhauling existing physics) is sufficient.

"Yet no other mathematical reasoning has produced Dark Matter, only the lack of sufficient matter to explain certain phenomena."

In other words, all other 'tools' ("no other mathematical reasoning") yield 'Dark Matter' as an explanation.

The worst case scenario is when we look at the data and no other explanation or 'tool' other than 'Dark Matter' is imagined.

Of course, the 'job' is clear. To provide 'mechanics' for the observational data.

Most 'objects' in our human understanding are recognizable.
Even abstract explanations appeal to our perception and to our data, eventually.

So, the rest of the 'tools'(mathematics) point to "only the lack of sufficient matter".

cont...
hush1
not rated yet Apr 09, 2011
In retrospect, humans were presumptuous and premature to lay claim to ALL properties of NORMAL matter. There is more to normal matter than meets our present scientific understanding to - "explain certain phenomena".

I have made an attempt to understand the point you felt I missed. I feel I have made inroads into the insight of the point you felt I missed.

If not, you probably will be the first to let me know, that I am still missing your point. :)
Mahal_Kita
not rated yet Apr 13, 2011
"I have made an attempt to understand the point you felt I missed. I feel I have made inroads into the insight of the point you felt I missed.

If not, you probably will be the first to let me know, that I am still missing your point. :)"


Your assumption could very well be true <evil grin> But I valued your input as it made my (ad hoc) point clearer to me ;-)
Parsec
5 / 5 (2) Apr 13, 2011
Do not make the mistake of over-generalizing what this article is saying. We have specific inputs, which in this case represent a collection of various-sized gravitationally bound 'clumps of stuff'. Calling this dark matter is just semantics. The tests were to see how the algorithms performed given those mathematical inputs. Nothing more. The fact that all of them performed very well simply means that simulations based on those algorithms can be counted on to predict future distributions from the input ones. Actually, that's not quite true. It means that future trend analysis is basically the same for all the algorithms, since they did not define what it meant to perform extremely well, and how they actually tracked the data to ensure that was true. How did they measure it to say it worked extremely well? It may be true that all of them failed spectacularly but they all failed in exactly the same way. VERY unlikely with 18 algorithms, but possible.
