New measurement of universe's expansion rate is 'stuck in the middle'

An impressionistic visualization of what's called the "Tip of the Red Giant Branch," which appears when diagramming the distribution of stars' brightness versus their color. Credit: Meredith Durbin

A team of collaborators from Carnegie and the University of Chicago used red giant stars that were observed by the Hubble Space Telescope to make an entirely new measurement of how fast the universe is expanding, throwing their hats into the ring of a hotly contested debate. Their result—which falls squarely between the two previous, competing values—will be published in the Astrophysical Journal.

Nearly a century ago, Carnegie astronomer Edwin Hubble discovered that the universe has been growing continuously since it exploded into being during the Big Bang. But precisely how fast it's expanding—a value termed the Hubble constant in his honor—has remained stubbornly elusive.

The Hubble constant has helped scientists sketch out the universe's history and structure, and an accurate measurement of it might reveal any flaws in the prevailing model.

"The Hubble constant is the cosmological parameter that sets the absolute scale, size, and age of the universe; it is one of the most direct ways we have of quantifying how the universe evolves," said lead author Wendy Freedman of the University of Chicago, who began this work at Carnegie.

Until now, there have been two primary tools used to measure the universe's rate of expansion. Unfortunately, their results don't agree, and the tension between the two numbers has persisted even as each side makes increasingly precise readings. It is possible, however, that the difference between the two values is due to systematic inaccuracies in one or both methods, which spurred the research team to develop their new technique.

One method, pioneered at Carnegie, uses stars called Cepheids, which pulsate at regular intervals. Because the rate at which they pulse is known to be related to their intrinsic brightness, astronomers can use their luminosities and the period between pulses to measure their distances from Earth.


"From afar two bells may well appear to be the same, listening to their tones can reveal that one is actually much larger and more distant, and the other is smaller and closer," explained Carnegie's Barry Madore, one of the paper's co-authors. "Likewise, comparing how bright distant Cepheids appear to be against the brightness of nearby Cepheids enables us to determine how far away each of the stars' host galaxies are from Earth."

When a celestial object's distance is known, a measurement of the speed at which it is moving away from us reveals the universe's rate of expansion. The ratio of these two figures—the velocity divided by the distance—is the Hubble constant.
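In code, that ratio is a single division. A toy example, with a hypothetical galaxy chosen so the numbers land near the values discussed in this article:

```python
# Hubble's law: H0 = velocity / distance.
velocity_km_s = 1400.0  # recession velocity of a hypothetical galaxy (km/s)
distance_mpc = 20.0     # its distance, from a standard candle (Mpc)

H0 = velocity_km_s / distance_mpc
print(f"H0 = {H0:.1f} km/s/Mpc")  # -> H0 = 70.0 km/s/Mpc
```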

The second method uses the afterglow left over from the Big Bang. Called the cosmic microwave background, it is the oldest light we can see. Patterns of compression in the thick, soupy plasma of which the baby universe was comprised can still be seen and mapped as slight temperature variations. These ripples, documenting the universe's first few moments, can be run forward in time through a model and used to predict the present-day Hubble constant.

The former technique says the expansion rate of the universe is 74.0 kilometers per second per megaparsec; the latter says it's 67.4. If the discrepancy is real, it could herald new physics.

Enter the third option.

The Carnegie-Chicago Hubble Program, led by Freedman and including Carnegie astronomers Madore, Christopher Burns, Mark Phillips, Jeff Rich, and Mark Seibert—as well as Carnegie-Princeton fellow Rachael Beaton—developed a new way to calculate the Hubble constant.


Their technique is based on a very luminous class of stars called red giants. At a certain point in their lifecycles, the helium in these stars is ignited, and their structures are rearranged by this new source of energy in their cores.

"Just as the cry of a loon is instantly recognizable among bird calls, the peak brightness of a red giant in this state is easily differentiated," Madore explained. "This makes them excellent standard candles."

The team made use of the Hubble Space Telescope's sensitive cameras to search for red giants in nearby galaxies.

"Think of it as scanning a crowd to identify the tallest person—that's like the brightest red giant experiencing a helium flash," said Burns. "If you lived in a world where you knew that the tallest person in any room would be that exact same height—as we assume that the brightest red giant's peak brightness is the same—you could use that information to tell you how far away the tallest person is from you in any given crowd."

Once the distances to these newly found red giants are known, the Hubble constant can be calculated with the help of another standard candle—type Ia supernovae—to diminish the uncertainty caused by the red giants' relative proximity to us and extend our reach out into the more-distant Hubble flow.
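A minimal sketch of the red giant step in Python, assuming an illustrative I-band tip magnitude near -4 (an approximate literature value, not the paper's calibrated figure):

```python
# Treat the TRGB as a standard candle: every galaxy's brightest
# helium-flash red giants peak near the same absolute magnitude.
M_TIP = -4.05  # approximate I-band tip magnitude, illustrative only

def trgb_distance_mpc(tip_apparent_mag):
    """Distance (Mpc) from the apparent magnitude of the observed tip."""
    d_parsecs = 10.0 ** ((tip_apparent_mag - M_TIP + 5.0) / 5.0)
    return d_parsecs / 1e6

# Hypothetical galaxy whose red giant branch tip appears at magnitude 24.3:
print(f"distance ~ {trgb_distance_mpc(24.3):.1f} Mpc")  # ~4.7 Mpc
```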

According to the red giant method, the universe's expansion rate is 69.8 kilometers per second per megaparsec—falling provocatively between the two previously determined numbers.

"We're like that old song, 'Stuck in the Middle with You,'" joked Madore. "Is there a crisis in cosmology? We'd hoped to be a tiebreaker, but for now the answer is: not so fast. The question of whether the standard model of the universe is complete or not remains to be answered."



More information: The Carnegie-Chicago Hubble Program. VIII. An Independent Determination of the Hubble Constant Based on the Tip of the Red Giant Branch. arxiv.org/abs/1907.05922
Journal information: Astrophysical Journal



User comments

Jul 17, 2019
If we believe that our world began some time ago, we are still in a position to decide which hypothesis, Lemaître's or Gamow's, was closer to reality. There is an opinion that the problems in standard cosmology could be solved by adjusting details. Our suggestion is that we have to go back to the original conceptions and use the observations accumulated since.
https://www.acade...osmology
https://www.acade...he_World

Jul 17, 2019
This article is badly written. It implies that there is some question about the universe expanding. There isn't. There is only a 4% difference between the different measurements.

Please try not to include articles that make errors like this. Or at least make it clear that physorg hasn't reviewed the article.

Jul 17, 2019
The numbers stated in the article are almost 10% apart.

Jul 17, 2019
If the universe were truly expanding, then the periodicity of cepheid variables would vary directly by a factor of (z+1), according to Einstein's Special Relativity; but, they don't.

Instead we are adding more epicycles.

Jul 18, 2019
Billions are spent on measuring how far off general relativity is, instead of devoting more money to finding the theory that should replace it. The main problem today is the largely unsubstantiated Einstein personality cult, which prevents saying publicly, among peers, that the emperor is naked.

Jul 18, 2019
It implies that there is some question about the universe expanding. There isn't. There is only a 4% difference between the different measurements.


That's a huge problem, because the expansion is supposed to be accelerating. That means it's an exponential function, and the error grows exponentially as well when you try to extrapolate backwards or forwards in time. For example, 1.04^100 is close to 19 times greater than 1.01^100 even though the coefficient is less than 4% off.

Over billions of years, even tiny errors in the expansion coefficients can lead to vastly different outcomes, and that's a problem for cosmology because these errors mask interesting events like the inflation period of the universe.

Jul 18, 2019
How about a more realistic model? One that includes action/reaction? And distance between the objects doing the gravitating?

https://youtu.be/4goInwbOix4

Jul 18, 2019
It is possible, however, that the difference between the two values is due to systematic inaccuracies in one or both methods,


Duh. Hey, maybe it's not an "initial value" problem in the first place. There's no common sense in the entire universe the size of an atom anyway, correct or not.

Jul 19, 2019
FWIW, the old and many new measurements are currently converging in the 68-70 span, whether global (observable universe) or local (within the last few Gyr). I have posted this before, but it needs to be said on the recent measurements of the Hubble parameter at the current universe age [km*s^-1*Mpc^-1]:

Planck CMB [2018] consensus [Planck w/dust+lowE+lensing+BAO]* 67.66 +0.42/-0.42
GW170817 star merger [2019] consensus [GW+EM+VLBI+LC+PLJ]** 68.1 +4.5/-4.3
CMASS cosmic voids [2019] consensus [Planck+LOWZ+CMASS+voids] 67.71 +0.43/-0.43
CCHP tip of red giant branch [2019] LMC calibration [TRGB (LMC)] 69.8 +2.5/-2.5

References have been posted here on phys.org in the last few weeks. The jet model is picked to cover parameter space; the jet-space (HD model) value is 70.3. The last 3 observations are independent; the first is (an older) consensus but uses two independent but agreeing data sets (spot size, spot polarization) derived from the CMB.

Jul 19, 2019
It implies that there is some question about the universe expanding. There isn't. There is only a 4% difference between the different measurements.


That's a huge problem, because the expansion is supposed to be accelerating.


Yes, after dark energy started to dominate, and it is.

The current exponential expansion does not add uncertainty to the type of outcome, but to model selection.

The problem is when the consensus cosmology fails. The flatness of space is safe, but say dark energy as vacuum energy (i.e. constant energy density) is not. The Planck paper, after checking the parameter space: "Simple extensions that can partially resolve these tensions are not favoured by the Planck data. ... None of the extended models that we have studied in this paper convincingly resolves the tension .... no compelling evidence to favour any of the extended models considered in this paper." [ https://arxiv.org...6209.pdf ].

The span seen here works; Cepheids do not.

Jul 19, 2019
What do you mean by claiming 1.04x10^100 is "19 times greater" than 1.01x10^100?

It's not. You have a calculator on your computer. Use it. Very simple stuff.

1.04x10^100/1.00x10^100 = 1.04. At least on my calculator. Do it yourself to confirm I'm right.

Jul 19, 2019
"The problem is when the consensus cosmology fails."

By the way, that is relatively easy to quantify against the Planck consensus. With the 2018 data, a current Hubble rate of 68.5 km*s^-1*Mpc^-1 would be odd (5% likelihood), one with 70 would be problematic (3 sigma theory threat).

However, there is some rubber in the consensus model when more data is put in, and it is still not a serious problem with 74 km*s^-1*Mpc^-1 because that observation is within its own 5 sigma region (as of yet). So we have one (or, I guess, several like it) odd data sets, but nothing odd with the consensus as such - yet.

The likelihood is that, like the data series on the universal speed limit once, these data sets will eventually converge (perhaps even jump re the consensus). Not only because of a lack of compelling reason why not, but because they are in the process of doing so (the CMB consensus wants to move up, the new methods want to go much lower than the extreme Cepheids).

Aug 02, 2019
It's not. You have a calculator on your computer. Use it. Very simple stuff.


You are multiplying the base number by 10^100 because you inserted an x10 in there that wasn't in the original for no discernible reason.

Then you did it again by changing 1.01 to 1.00 for no reason. You're calculating an entirely different comparison.

Calculate (1.04^100) / (1.01^100) and you get 18.67, which rounds up to 19.
The point is that even over a short period of time, such as 100 intervals, a small difference like that results in a great difference in the predicted outcome. That's why scientists are trying to hone the cosmological constants of nature down to something like 12 digits of precision. Admitting a 4% margin of error is basically saying "We have absolutely no idea".
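A quick way to check both calculations is a couple of lines of Python (100 here is the thread's illustrative number of compounding steps, not a physical quantity):

```python
print(1.04**100 / 1.01**100)  # ~18.68: compounding 4% vs 1% over 100 steps
print(1.04e100 / 1.00e100)    # 1.04: the different (mistaken) comparison above
```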

