
The Certainty Of Uncertainty: Scientists Know Exactly How Well We Don't Know Things

This article is more than 8 years old.

There are a whole bunch of words whose precise technical meaning in a scientific context differs from the common everyday meaning in ways that obstruct communication between scientists and the general public. The classic example is "theory," though the colloquial usage in physics at least tends to be fairly close to the everyday meaning. Intro physics classes also tend to spend some time on the distinction between "velocity" and "speed," which doesn't really exist in everyday usage but is critically important in some physics contexts (velocity is speed plus the direction of motion).

The most vexing of these, though, might be "uncertainty." As someone who both teaches introductory physics and fields a lot of questions about quantum physics, I'm constantly running into misconceptions and misuses of the whole idea of uncertainty (and a whole constellation of related concepts in probability and statistics). In everyday use, "uncertainty" is sort of a verbal shrug, a "who knows what's going to happen?", rather like the "Local News Anchor" entry in this awesome Math With Bad Drawings post. The uncertainty in a scientific measurement, on the other hand, is a rigorously defined and quantifiable thing. When we talk about the uncertainty in a scientific measurement, it's less an admission of ignorance than an expression of confidence: we're saying how tightly we're able to constrain what we don't know.

Of course, this is compounded by the fact that a lot of scientists aren't terribly good with handling uncertainty or explaining it to the general public, which leads to things like the replication crisis in psychology and this classic xkcd. There are big and thorny issues there, and I don't want to get into the gory details of how this ought to work; instead, I want to talk a little about the conceptual gap between uncertainty as often misunderstood, and uncertainty as it's meant in scientific contexts.

The crudest misunderstanding of uncertainty that I run into is in the context of intro physics labs, where we start trying to get students to associate uncertainties with their measurements. In this context, "uncertainty" is frequently conflated with "error," in the sense of "something was done incorrectly." The clearest expression of this misunderstanding is the use of "human error" as a catch-all category. Any difference between two measurements, or between a measurement and an expected value, is attributed to "human error."

This drives me nuts, as a professor, because as I tell students unfortunate enough to use it, "human error" means "we probably did it wrong." And if you suspect you might've done something wrong, you shouldn't write up your results and hand them in for a grade, you should re-do the experiment. The uncertainty in an experimental measurement is what's left over once you've convinced yourself that you didn't do anything wrong in making the measurement. People who are really serious about the business of precision measurement get kind of obsessive about this, something you can trace back hundreds of years. One of the first truly great precision measurements was Henry Cavendish's experiment to "weigh the Earth," now usually expressed as a measurement of the gravitational constant G. Cavendish set a high bar for subsequent measurements of tiny forces, an area of physics that continues to thrive.
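To make the "quantifiable thing" concrete, here's a minimal sketch of how an intro lab turns repeated measurements into a number with an uncertainty attached. The measurements are invented round numbers for a hypothetical pendulum measurement of g, not data from any real lab:

```python
import statistics

# Hypothetical repeated measurements of g (m/s^2) from a pendulum lab;
# the values are invented for illustration.
trials = [9.78, 9.83, 9.79, 9.85, 9.80, 9.82, 9.77, 9.84]

mean = statistics.mean(trials)

# Standard deviation: the spread of individual trials.
stdev = statistics.stdev(trials)

# Standard error of the mean: the uncertainty on the best estimate,
# which shrinks as you take more trials.
std_error = stdev / len(trials) ** 0.5

print(f"g = {mean:.3f} +/- {std_error:.3f} m/s^2")
```

The point of the exercise is that the "+/-" term is a calculated quantity, not a shrug: it says how tightly this set of measurements constrains the result, and taking more data makes it smaller in a predictable way.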

One thing that often surprises students learning about how to handle uncertainty is that the best way to sort problems out is often to deliberately try to make them worse. When Cavendish feared his measurements might be influenced by magnetism, he replaced the non-magnetic lead spheres with actual magnets. When that didn't lead to a dramatic change, he could rule out magnetism as a source of problems, and more importantly put a quantitative limit on how big an effect magnetic forces might possibly have. When he discovered a real issue due to different rates of heating and cooling of the components of the apparatus, he nailed it down by deliberately heating parts with candles, and cooling them with blocks of ice. These measurements let him give a number for how big the effect of the smaller changes in temperature could possibly be.

Proper scientific uncertainty is not a hedge against "human error," it's a quantification of what's left over once you correct all the human errors. It's saying "while we are unable to perfectly control everything in the universe, we know that the uncontrolled effects can't be any bigger than this." And "this" can be very small indeed-- for things like recent tests of Lorentz symmetry (a key part of relativity) the uncertainty is in the 18th or 19th decimal place.

In quantum physics, of course, there's the famous Heisenberg Uncertainty Principle, which is another persistent source of misunderstandings of "uncertainty." In this case, it's something of a translation error-- in the early days, Heisenberg used something closer to "indeterminacy," which is arguably a better choice, but for whatever reason, the field eventually settled on "uncertainty."

In this case, the most common misconception stems from a semi-classical analogy. One of the common ways of describing how the Uncertainty Principle arises is as a perturbation in the process of measurement. This creates the impression that there's a "real" value to be found, if only you could come up with a sufficiently clever technique to measure it.

Quantum uncertainty, though, is better understood as a fundamental consequence of the dual particle and wave nature of everything in the universe. It's simply impossible for an object with both particle and wave characteristics to have a well-defined position and a well-defined velocity at the same time, as explained in this video I wrote for TED-Ed:

The incompatibility of these two classical sets of properties means that there will always be some spread in the possible positions and velocities of a quantum object. If you try to measure one of these properties for a single particle, you'll get one of the values from within that range, and if you repeat the measurement many times, you can trace out the full distribution of possible results. That's part of what's going on in the famous first image of a Bose-Einstein Condensate in rubidium:
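You can see the flavor of that repeated-measurement process in a toy simulation (this is not a real quantum calculation, just random draws standing in for single-shot position measurements on identically prepared particles, with an assumed one-micron spread):

```python
import random
import statistics

random.seed(42)

# Toy model, not a real quantum simulation: single position measurements
# on identically prepared particles scatter randomly, here drawn from a
# Gaussian with a fixed, well-defined spread.
TRUE_SPREAD = 1.0e-6  # assumed 1-micron position spread (illustrative)

# One measurement gives one value from somewhere in the range...
one_shot = random.gauss(0.0, TRUE_SPREAD)

# ...but many repeated measurements trace out the full distribution,
# whose width converges to the underlying spread.
many_shots = [random.gauss(0.0, TRUE_SPREAD) for _ in range(100_000)]
measured_spread = statistics.stdev(many_shots)

print(f"spread from repeated measurements: {measured_spread:.2e} m")
```

Any single run of the "experiment" gives one unpredictable number, but the distribution built up over many runs is sharply defined-- which is exactly the sense in which quantum results are both random and tightly constrained.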

The oblong shape of the BEC can be attributed in part to uncertainty-- it's confined more tightly in one direction than the other, which leads to an increased momentum uncertainty along that axis. When they release the BEC and let it expand, the atoms spread out more along that axis than the other, and since there are around 100,000 of them, they trace out the full range of possible values.
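The asymmetric expansion follows directly from the Heisenberg relation, delta-x times delta-p being at least hbar/2. Here's a quick sketch with invented trap widths (round numbers for illustration, not the actual experimental values):

```python
# Minimum momentum spread from the Heisenberg relation,
# delta_x * delta_p >= hbar / 2, for two trap axes.
# The trap widths are invented round numbers, not real experimental values.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

tight_axis_width = 1.0e-6  # meters: tighter confinement
loose_axis_width = 5.0e-6  # meters: looser confinement

def min_momentum_spread(delta_x):
    """Smallest momentum uncertainty allowed for a given position spread."""
    return HBAR / (2 * delta_x)

# The tightly confined axis is forced to have the larger momentum spread,
# so the cloud expands faster along that direction once released.
print(min_momentum_spread(tight_axis_width))  # larger
print(min_momentum_spread(loose_axis_width))  # smaller
```

Squeeze the position spread down by a factor of five along one axis, and the minimum momentum spread along that axis goes up by the same factor-- hence the cigar-shaped cloud turning pancake-shaped as it expands.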

Again, though, the fact that this momentum is indeterminate doesn't mean it's completely up for grabs. The possible range of values is constrained very tightly, and you can also see this in the picture, from the fact that the BEC is a narrow "spike." The fact that quantum properties are determined probabilistically doesn't mean that anything goes-- we're not in the Han Solo portion of this bad math drawing (repeated link from above), but we have a very good idea of what results can possibly occur. And again, these constraints can be extremely tight-- the influence of vacuum fluctuations, which you can think of as a manifestation of a different sort of uncertainty, can be calculated and measured to 13 or 14 decimal places. That doesn't leave much wiggle room.

Finally, there's a sort of grand conceptual problem with phenomena like dark matter and dark energy. You will sometimes hear people pitching some alternative theory of physics say things like "Well, physicists say that 95% of the universe is this dark stuff, but they have no idea what it is, so anything is possible."

While it's true that the nature of dark matter remains uncertain-- we can't point to a specific exotic particle and say "this is what dark matter is made of"-- again, we are not totally ignorant. We can say with a great deal of certainty what it isn't, and place fairly tight constraints on what it might be. I highly recommend Neal Weiner's talk at Convergence as an overview of what we know about dark matter and the search for it. We can constrain its properties pretty tightly, and those constraints leave very little room even for fairly reasonable theories of exotic physics, let alone the ones that sound crazy.

The common theme of all of these misunderstood uncertainties is that they're actually tightly constrained. There are a lot of things we don't know in science, but it's not a free-for-all. We know exactly how well we don't know things, and those bounds are getting tighter all the time.
