Uncertainty revisited: Novel tradeoffs in quantum measurement

May 28, 2013 by Stuart Mason Dambrot feature
Error-tradeoff and error-disturbance relations. Copyright © PNAS, doi:10.1073/pnas.1219331110

(Phys.org) —There is, so to speak, uncertainty about uncertainty – that is, about how Heisenberg's uncertainty principle describes the extent to which measuring one observable disturbs another. More specifically, the confusion is between two distinct statements: on the one hand, as Heisenberg first intuited, the measurement of one observable on a quantum state necessarily disturbs another, incompatible observable; on the other hand, the indeterminacy of the outcomes when either one or the other observable is measured is bounded. Recently, Dr. Cyril Branciard at The University of Queensland precisely quantified the former by showing how it is possible to approximate the joint measurement of two observables, albeit at the cost of introducing errors with respect to the ideal measurement of each. Moreover, he characterized the disturbance of an observable induced by the approximate measurement of another, and derived a stronger error-disturbance relation for this scenario.

Dr. Branciard describes the research and challenges he encountered. "Quantum theory tells us that certain measurements are incompatible and cannot be performed jointly," Branciard tells Phys.org. For example, he illustrates, it is impossible to simultaneously measure the position and speed of a particle, the spin of a particle in different directions, or the polarization of a photon in different directions.

"Although such joint measurements are forbidden," Branciard continues, "one can still try to approximate them. For instance, one can approximate the joint measurement of the spin of a particle in two different directions by actually measuring the spin in a direction in between. At the price of accepting some errors; this yields partial information on the spin in both directions – and the larger the precision is on one direction, the larger the error on the other must be." While it's challenging to picture what it means to measure a property "in between position and speed," he adds, it's possible to measure something that will give partial information on both the position and speed – but again, the more precise the position is measured, the less precise the speed, and vice versa.

There is therefore a tradeoff between the precisions achievable for the two incompatible observables, or equivalently between the errors made in approximating them. What exactly is this tradeoff? How well can one approximate the joint measurement? What limits, precisely, does quantum theory impose? This tradeoff – between the error on one observable and the error on the other – can be characterized by so-called error-tradeoff relations, which show that certain combinations of error values are forbidden.

"Certain error-tradeoff relations were known already, and set bounds on the values allowed," Branciard explains. "However, it turns out that in general those bounds could not be reached, since quantum theory actually restricts the possible error values more than what the previous relations were imposing." In his paper, Branciard derives new error-tradeoff relations which are tight, in the sense that the bounds they impose can be reached when one chooses a "good enough" approximation strategy. He notes that they thus characterize the optimal tradeoff one can have between the errors on the two observables.

Branciard points out that the fact that the joint measurement of incompatible observables is impossible was first realized by Heisenberg in 1927, when, in his seminal paper, he explained that the measurement of one observable necessarily disturbs the other, and suggested an error-disturbance relation to quantify that. "General uncertainty relations were soon to be derived rigorously," Branciard continues.

More specifically, the uncertainty relation known as the uncertainty principle or Heisenberg principle is a mathematical inequality asserting that there is a fundamental limit to the precision with which certain pairs of physical properties of a particle known as complementary variables, such as a particle's position and momentum, can be known simultaneously. In the case of position and momentum, the more precisely the position of a particle is determined, the less precisely its momentum can be known, and vice versa.
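In textbook notation (a standard formulation, not specific to the paper discussed here), this reads:

```latex
% Heisenberg/Kennard relation for position and momentum, and Robertson's
% generalization to an arbitrary pair of observables A and B measured on a state |psi>:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle\psi|[A,B]|\psi\rangle\bigr|,
\qquad [A,B] = AB - BA
% where Delta A and Delta B are the standard deviations of the outcome statistics
% when A or B is measured on identically prepared copies of |psi>.
```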

"However," Branciard notes, these "standard" uncertainty relations quantify a different aspect of Heisenberg's uncertainty principle: Instead of referring to the joint measurement of two observables on the same physical system – or to the measurement of one observable that perturbs the subsequent measurement of the other observable on the same system, as initially considered by Heisenberg – standard uncertainty relations bound the statistical indeterminacy of the measurement outcomes when either one or the other observable is measured on independent, identically prepared systems."

Constraints imposed by error-tradeoff and error-disturbance relations. Copyright © PNAS, doi:10.1073/pnas.1219331110

Branciard acknowledges that there has been, and still is, confusion between those two versions of the uncertainty principle – that is, the joint measurement aspect and the statistical indeterminacy for exclusive measurements – and many physicists misunderstood the standard uncertainty relations as implying limits on the joint measurability of incompatible observables. "In fact," he points out, "it was widely believed that the standard uncertainty relation was also valid for approximate joint measurements, if one simply replaces the uncertainties with the errors for the position and momentum. However, this relation is in fact in general not valid."

Surprisingly little work has been done on the joint measurement aspect of the uncertainty principle; it was quantified only in the last decade, when Ozawa1 derived the first universally valid relations between errors and disturbance – that is, relations valid for all approximation strategies – as well as error-tradeoff relations for joint measurements. "However," says Branciard, "these relations were not tight. My paper presents new, stronger relations that are. In order to quantify the uncertainty principle for approximate joint measurements and derive error-tradeoff relations," he adds, "one first needs to agree on a framework and on definitions for the errors in the approximation. Ozawa developed such a framework, on which I based my analysis."
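For orientation, the joint-measurement form of Ozawa's relation is usually quoted as eA·eB + eA·ΔB + ΔA·eB ≥ ½|⟨[A,B]⟩|, where ΔA and ΔB are the standard deviations of the ideal measurements (this is the commonly cited form from the literature, not a formula reproduced from Branciard's paper). The sketch below reuses the intermediate-direction spin strategy from above and checks numerically that the bound is respected – with plenty of room to spare for this particular strategy.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)  # spin-up along z, chosen for illustration

def expval(op, s):
    return np.real(s.conj() @ op @ s)

def rms_error(approx, ideal, s):
    d = approx - ideal
    return np.sqrt(expval(d @ d, s))

def std(op, s):
    return np.sqrt(expval(op @ op, s) - expval(op, s) ** 2)

# Half the modulus of the expected commutator sets the right-hand-side bound.
comm = sx @ sy - sy @ sx
bound = 0.5 * abs(psi.conj() @ comm @ psi)

# Intermediate-direction strategy at 45 degrees between x and y.
theta = np.deg2rad(45)
M = np.cos(theta) * sx + np.sin(theta) * sy
eA, eB = rms_error(M, sx, psi), rms_error(M, sy, psi)
dA, dB = std(sx, psi), std(sy, psi)

lhs = eA * eB + eA * dB + dA * eB
print(f"eA = {eA:.3f}, eB = {eB:.3f}")
print(f"Ozawa-type left-hand side = {lhs:.3f}  >=  bound = {bound:.3f}")
```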

A key aspect of Branciard's research is that quantum theory describes the states of a quantum system, their evolution and measurements in geometric terms – that is, physical states are vectors in a high-dimensional, complex Hilbert space, and measurements are represented by projections onto certain orthogonal bases of this high-dimensional space. "I made the most of this geometric picture to derive my new relations," Branciard explains. "Namely, I represented ideal and approximate measurements by vectors in a similar (but real) space, and translated the errors in the approximations into distances between the vectors. The incompatibility of the two observables to be approximated gave constraints on the possible configuration of those vectors in terms of the angles between them." By then looking for general constraints on real vectors in a high-dimensional space, and on how close they can be to one another when some of their angles are fixed, Branciard was able to derive his relation between the errors in the approximate joint measurement.
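The derivation itself lives in a high-dimensional real space, but the underlying geometric intuition can be caricatured in two dimensions: if two fixed unit vectors stand in for the ideal measurements and a third unit vector for the approximation, the two distances cannot both vanish once the fixed vectors point in different directions. The toy computation below is purely illustrative and is not the paper's actual argument.

```python
import numpy as np

# Two fixed unit vectors standing in for the ideal measurements of A and B;
# the angle between them plays the role of the observables' incompatibility.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Scan unit vectors m (the "approximate measurement") around the circle and
# record the squared distances to a and b; their sum can never reach zero.
ts = np.linspace(0.0, 2.0 * np.pi, 100001)
ms = np.stack([np.cos(ts), np.sin(ts)], axis=1)
cost = np.sum((ms - a) ** 2, axis=1) + np.sum((ms - b) ** 2, axis=1)

print(f"minimum of d(m,a)^2 + d(m,b)^2 over unit vectors m: {cost.min():.3f}")
# ~1.172 (= 4 - 2*sqrt(2)): both distances cannot be made small at the same time.
```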

Branciard again notes that he used the framework developed mainly by Ozawa, who proposed to quantify the errors in the approximations by the statistical deviations between the approximations and the ideal measurements. In this framework, any measurement can be used to approximate any other measurement, with the statistical deviation defining the error. The advantage of Branciard's new relation over previously derived ones, however, is that it is, as he described above, tight. "It does not only tell that certain values are forbidden," he points out, "but also shows that the bounds it imposes can be reached. In fact," he illustrates, "I could show how to saturate my new relation for any pair of observables A and B and for any state, and reach all optimal error values eA and eB, whether one wants a small eA at the price of having to increase eB, or vice versa."

Moreover, he continues, the fact that the relation is tight is relevant experimentally, if one aims at testing these kinds of relations. "Showing that a given relation is satisfied is trivial if the relation is universally valid, since any measurement should satisfy it. What is less trivial is to show experimentally that one can indeed reach the bound of a tight relation. Experimental techniques now allow one to perform measurements down to the limits imposed by quantum theory, which makes the study of error-tradeoff relations quite timely. Also," he adds, "the tightness of error-tradeoff relations may be crucial if one considers applications such as the security of quantum communications: if one uses such relations to study how quantum theory restricts the possible actions of an eavesdropper, it is not enough to know what cannot be done – which any valid relation tells us – one also needs to know what can be done, and that is precisely what a tight relation quantifies."

In Branciard's framework, the error-disturbance scenario initially considered by Heisenberg can be seen as a particular case of the joint measurement scenario, in that an approximate measurement of the first observable and a subsequent measurement of the then-disturbed incompatible second observable, taken together, constitute an approximate measurement of both observables. More specifically, the second measurement is only approximated because it is performed on the system after it has been disturbed by the first measurement.

"Hence, in my framework," Branciard summarizes, "any constraint on approximation errors in joint measurements also applies to the error-disturbance scenario, in which the error on the second observable is interpreted as its disturbance and error-tradeoff relations simply imply error-disturbance relations. In fact," he adds, "while the error-disturbance case is a particular case of the more general joint measurement scenario, it's actually more constrained. This is because in that scenario the approximation of the second observable is done via the actual measurement of precisely that observable after the system has been disturbed by the approximate measurement of the first observable." This restricts the possible strategies for approximating a joint measurement, and as a consequence stronger constraints can generally be derived for errors versus disturbances rather than for error tradeoffs.

Branciard gives a specific example. "Suppose the second observable can produce two possible measurement results – for example, +1 or -1 – as when measuring a spin component in a given direction. In the error-disturbance scenario, the approximation of the second observable – that is, the actual measurement of that observable on the disturbed system – is restricted to produce either the result +1 or the result -1. However, in the more general scenario of approximate joint measurements, it may give lower errors in my framework to approximate the measurement by outputting other values, say 1/2 or -3. For these reasons, one can in general derive error-disturbance relations that are stronger than error-tradeoff relations, as shown in my paper."
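A short numerical illustration of that last point (with illustrative choices throughout, not taken from the paper): when sigma_x is approximated by measuring the spin along a direction midway between x and y, simply rescaling the reported values away from ±1 already lowers the root-mean-square error.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)  # illustrative state

def expval(op, s):
    return np.real(s.conj() @ op @ s)

def rms_error(approx, ideal, s):
    d = approx - ideal
    return np.sqrt(expval(d @ d, s))

theta = np.deg2rad(45)
inter = np.cos(theta) * sx + np.sin(theta) * sy  # spin along the in-between direction

# Strategy 1: report the raw +/-1 outcome as the estimate of sigma_x.
err_raw = rms_error(inter, sx, psi)
# Strategy 2: report cos(theta) times the outcome, i.e. the values +/-cos(theta),
# which are not eigenvalues of sigma_x at all.
err_rescaled = rms_error(np.cos(theta) * inter, sx, psi)

print(f"error with outputs +/-1:       {err_raw:.3f}")       # ~0.765
print(f"error with rescaled outputs:   {err_rescaled:.3f}")  # ~0.707
```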

The uncertainty principle is one of the main tenets of quantum theory, and is a crucial feature for applications in quantum information science, such as quantum computing, quantum communications, quantum cryptography, and quantum key distribution. "Standard uncertainty relations in terms of statistical indeterminacy for exclusive measurements are already used to prove the security of quantum key distribution," Branciard points out. "In a similar spirit, it may also be possible to use the joint measurement version of the uncertainty principle to analyze the possibility for quantum information applications. This would, however, probably require the expression of error-tradeoff relations in terms of information, by quantifying the limited information gained on each observable, rather than talking about errors."

Looking ahead, Branciard describes possible directions for future research. "As mentioned, in order to make the most of the joint measurement version of the uncertainty principle and be able to use it to prove, for instance, the security of quantum information applications, it would be useful to express it in terms of information-theoretic – that is, entropic – quantities. Little has been studied in this direction, which would require developing a general framework to correctly quantify the partial information gained in approximate joint measurements, and then derive entropic uncertainty relations adapted to the scenarios under consideration."
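For context, the statistical-indeterminacy version of the principle already has a well-known entropic form, the Maassen–Uffink relation H(A) + H(B) ≥ -2·log2(c), where c is the largest overlap between eigenstates of A and B; the sketch below checks it for two spin components on randomly chosen states (all choices are illustrative). What Branciard describes as missing is an analogous information-theoretic treatment of the errors in approximate joint measurements.

```python
import numpy as np

def shannon(p):
    """Shannon entropy, in bits, of a probability vector."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Eigenbases of sigma_z and sigma_x (the two "exclusive" measurements).
z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

# Maassen-Uffink bound: -2 log2 of the largest overlap between the bases (1 bit here).
c = max(abs(u.conj() @ v) for u in z_basis for v in x_basis)
bound = -2.0 * np.log2(c)

rng = np.random.default_rng(1)
for _ in range(5):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)  # random pure qubit state
    v /= np.linalg.norm(v)
    pz = np.array([abs(u.conj() @ v) ** 2 for u in z_basis])
    px = np.array([abs(u.conj() @ v) ** 2 for u in x_basis])
    print(f"H(Z) + H(X) = {shannon(pz) + shannon(px):.3f}  >=  bound = {bound:.3f}")
```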

Beyond its possible applications in quantum information science, Branciard adds, the study of the uncertainty principle brings new insights into the foundations of quantum theory – and for Branciard, the puzzling questions in quantum foundations include why quantum theory imposes such limits on measurements, and why it contains so many counterintuitive features, such as quantum entanglement and nonlocality.

"A link has recently been established between standard uncertainty relations and the nonlocality of any theory. Studying this joint measurement aspect of the ," Branciard concludes, "may bring new insights and give a more complete picture of quantum theory by offering to address these metaphysical questions – which have been challenging physicists and philosophers since the invention of quantum theory – from a new perspective."

Explore further: New principle may help explain why nature is quantum

More information: Error-tradeoff and error-disturbance relations for incompatible quantum measurements, PNAS April 23, 2013 vol. 110 no. 17 6742-6747, doi:10.1073/pnas.1219331110

Related:
1. M. Ozawa, Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement, Physical Review A 67, 042105 (2003), doi:10.1103/PhysRevA.67.042105

User comments

vacuum-mechanics
1.3 / 5 (12) May 28, 2013
"A link has recently been established between standard uncertainty relations and the nonlocality of any theory. Studying this joint measurement aspect of the uncertainty principle," Branciard concludes, "may bring new insights and give a more complete picture of quantum theory by offering to address these metaphysical questions – which have been challenging physicists and philosophers since the invention of quantum theory – from a new perspective."

Actually, a more complete picture of quantum theory could be obtained by understanding the physical mechanism of the theory, which would then also provide us with the working mechanism behind the uncertainty relations, as below…
http://www.vacuum...19〈=en
Ober
3.4 / 5 (10) May 28, 2013
Vacuum FFS, are you a spam bot? Please stop posting the same thing for every article you read. Instead of advertising a hypothesis non-stop, post something different, PLEASE!!!!!!!!
If I moderated this site, I would simply mute or ban you from posting, as you have nothing to add that you haven't already posted!!!! It is now the norm for most people to simply skip your posts. So isn't your incessant need to post the same thing counterproductive to your hypothesis advertising? Will you EVER learn this? I used to give your ideas some consideration, but I now detest your posts with a passion.

WAKE UP and SMELL THE FLOWERS!!!
beleg
1 / 5 (3) May 28, 2013
"why does quantum theory impose such limits on measurements, and why does it contain so many counterintuitive features, such as quantum entanglement and non locality?"

Ahh, because the assumption that the quantum state vector is the complete informational theoretical description begs those questions?

LarryD
1 / 5 (2) May 28, 2013
Yes, beleg, I suppose that might be true. But let's take away the math and get to a common-sense everyday meaning. (Ha! that's provoking for a start...never mind eh?) We have the universe 'out there', so vast that a layman might find it impossible to imagine. Then there is the quantum level, which is tiny again, almost impossible to understand...and here we are stuck in the middle. To investigate the levels we HAVE to build gigantic machines (compared to our size) and although I greatly admire the engineers who make them, there is always going to be a margin of error. 'Why does quantum theory impose...' well maybe it's just because we can't ruddy well get down there and do a 'hands-on job' to find out...or maybe the limits are imposed by us.
All things are relative (not SR or GR relative)...blah blah...I'd better stop before I get philosophical.
kochevnik
1.7 / 5 (6) May 29, 2013
This measurement may be surprisingly useful. For in human existence there is always duality and uncertainty. This is expressed in real numbers, always with a shrinking and an exploding component, and an understood implicit origin of zero. Real numbers are a composition of that which is known and that which shrinks to the infinitesimal as a requirement to obtain that knowledge. Man balances himself upon a circle, which is an unstable repeller, forever diverging points simultaneously into the infinite and the infinitesimal.
antialias_physorg
4.3 / 5 (6) May 29, 2013
Why does quantum theory impose...

Well, that's the beauty of it - it's not dependent on a failure in understanding or on what type of machinery you use to test. The uncertainty isn't an artifact of us being clumsy - it's innate to things (otherwise we wouldn't observe a lot of effects we DO observe - like interference or tunneling. Without either, there'd be quite a few products in your home that wouldn't work.)

The thing is that it's not our failure of common sense - it's that common sense is an evolutionary trait adapted to that in-between range (rather outside the areas of 'cosmically huge' and 'subatomically small').
Trying to fit common sense to these extreme areas where it isn't adapted to makes no...erm...common sense.
beleg
1 / 5 (3) May 29, 2013
@LarryD
AA addresses your comment better than I am able to address your comment.
Explaining (our) perceptions with concepts outside our senses is anything but common.
Yet, as AA states, this is common sense.
LarryD
1 / 5 (2) May 29, 2013
Thanks, antialias, you've proved my point '...otherwise we wouldn't observe a lot of effects we DO observe - like interference or tunneling...' We have to use machines to observe, and machines are probably imperfect, made by imperfect beings. But I think you misunderstand me. Please ponder another, er...experiment. I go to a shop and buy a new camera, tripod etc. As I surely would, I go around taking photos of this and that, including using the tripod and remote so that I....you know what I mean. But when I download the photos to my comp, every one is blurred. What do I do? I check exposure etc. and find all is correct. Now what do I do? I take the camera back to the shop and complain. I want the camera to give pictures as I see the objects. What if the world is blurred?
Yes, I know, all very unlikely, but it was 'uncommon-common sense' (QM, EM etc.) that made the camera possible, yet I/we insist that they should 'report' my/our values.
I'm with you guys, love these new experiments but sometimes...
Noumenon
2.2 / 5 (34) May 29, 2013
why does quantum theory impose such limits on measurements, and why [..] counterintuitive features, such as [..] entanglement and non locality?


The a priori intellectual faculties with which the mind evolved to synthesize and order experience at the macroscopic scale are, in effect, conceptual artifacts, and are therefore effectively artificial when applied to the quantum realm.

In other words, acquiring intuitive understanding presumes a conceptual form, which may not be physically justified, though it is epistemologically necessary.

Such concepts,... locality, counterfactuality, space, time, separability, and causality,... necessary for intuitive understanding at the macro scale, are exposed as a mind dependent artificial synthesis, when applied to the quantum realm.

This is why physics cannot provide knowledge of 'Independent Reality', and why 'Objective Realism' is therefore invalid. It can only provide predictive knowledge of experienced reality, i.e., empirical reality.
smd
2.3 / 5 (3) May 29, 2013
Well said, Noumenon, antialias and LarryD. I've been making this case for months and have grown weary of the endless, often hostile naive conceptualizations that others use when pushing back on my posts.

(FYI, I'm the author of this and many other features on quantum mechanics (and other research papers) appearing on Phys.org and Medical Xpress.)
Noumenon
1.9 / 5 (30) May 29, 2013
Hello smd, yes, five years of subjecting comment readers to my references to Immanuel Kant's epistemology ('Critique of Pure Reason') have invited attacks and misunderstandings.

Sometimes Bohr spoke as if he was aware of Kant, but only vaguely. Abraham Pais called Bohr the natural successor to Immanuel Kant. Heisenberg speaks of Kant as well in his 'Physics and Philosophy'.

Bernard d'Espagnat, once a student of Louis de Broglie, has written on the subject in his 'On Physics and Philosophy' and 'Veiled Reality'. He arrives at epistemological conclusions very similar to Kant's and mentions him frequently in the book, though he does not follow Kant's line of reasoning, speaking instead purely in terms of QM interpretations. He even retains the notion of a noumenal reality in his 'veiled reality' concept.
beleg
1 / 5 (3) May 30, 2013
Do the last two comments mean we must discard Bell applications and make hidden variables salon fashionable again?
antialias_physorg
4.3 / 5 (6) May 30, 2013
we have to use machines to observe and machines are probably imperfect, made by imperfect beings.

The point here is that it doesn't matter how perfect or imperfect your machine is (or how perfect or imperfect the interpretations of the results are).
It could all be mathematically perfect and you'd STILL get uncertainty in your measurements.

The imperfection/perfection debate only stems from our psychological need for things to be either THIS or THAT (and not in between, or many things at once). QM shows us that the universe isn't like that.

It is indeed uncertain at its core - and not just because we aren't looking hard enough.

In your words: the world IS blurred.
Speaking from an information point of view: No measurement apparatus can ADD information that isn't there in the first place. And uncertainty is a measure of the limit of that information content in reality.
beleg
1 / 5 (2) May 30, 2013
Agreed. Great take and way to view this. Of course this isn't the last nail to the certainty of uncertainty. Finality is expressed in terms of something superseded, obsolete or abandoned.
Somewhat partially akin to evolution.
