Unlike politicians, scientists want to know what they’re talking about when they use a technical word like “Uncertainty.” When Heisenberg laid out his Uncertainty Principle, he wasn’t talking about doubt. He was talking about how closely experimental results can cluster together, and he was putting that in numbers.
Think of Robin Hood competing for the Golden Arrow. For the showmanship of the thing, Robin wasn’t just trying to hit the target; he wanted his arrow to split the Sheriff’s. If the Sheriff’s shot was in the second ring (moderate accuracy, from the target’s point of view), then Robin’s had to hit exactly the same off-center location (still moderate accuracy but great precision). The Heisenberg Uncertainty Principle (HUP) is all about precision (a.k.a., range of variation).
We’ve all encountered exams that were graded “on the curve.” But what curve is that? I can say from personal experience that it’s extraordinarily difficult to create an exam where the average grade is 75. I want to give everyone the chance to show what they’ve learned. Each student probably learned only part of what’s in the unit, but I won’t know which part until after the exam is graded. The only way to be fair is to ask about everything in the unit. Students complained that my tests were really hard because to get 100 they had to know it all.
Translating test scores to grades for a small class was straightforward. I would plot how many papers got between 95 and 100, how many got 90-95, etc., and look at the graph. Nearly always the scores fell into distinct clusters. There were a few people who clearly had the material down pat; they earned an “A.” Then there was a second group who didn’t do as well as the A’s but did significantly better than the rest of the class; they earned a “B.” At the other end there was a (hopefully small) group of students who were floundering. Long-term I tried to give them extra help, but short-term I had no choice but to give them an “F.”
With a large class those distinctions got blurred and all I saw (usually) was a single broad range of scores, the well-known “bell-shaped curve.” If the test was easy the bell was centered around a high score. If the test was hard that center was much lower. What’s interesting, though, is that the width of that bell for a given class stayed pretty much the same. The curve’s width is described by a number called the standard deviation (SD), proportional to the width at half-height. If a student asked, “What’s my score?” I could look at the curve for that exam and say there’s a 68% chance that the score was within one SD of the average, and a 95% chance that it was within two SD’s.
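If you want to watch those percentages appear on their own, here’s a minimal sketch in Python. The class of 500, the average of 75, and the SD of 8 are numbers I made up for illustration, and the scores are simulated rather than pulled from a real gradebook.

```python
# A quick numerical check of the bell-curve rule of thumb described above.
# Class size, average, and SD are invented numbers, not real exam data.
import numpy as np

rng = np.random.default_rng(seed=1)
scores = rng.normal(loc=75, scale=8, size=500)   # hypothetical exam: average 75, SD 8

avg = scores.mean()
sd = scores.std(ddof=1)                          # sample standard deviation

within_1sd = np.mean(np.abs(scores - avg) < 1 * sd)
within_2sd = np.mean(np.abs(scores - avg) < 2 * sd)

print(f"average = {avg:.1f}, SD = {sd:.1f}")
print(f"fraction within 1 SD: {within_1sd:.0%}")   # comes out near 68%
print(f"fraction within 2 SD: {within_2sd:.0%}")   # comes out near 95%
```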
The same bell-shape also shows up in research situations where a scientist wants to measure some real-world number, be it an asteroid’s weight or elephant gestation time. He can’t know the true value, so instead he makes many replicate measurements or pays close attention to many pregnant elephants. He summarizes his results by reporting the average of all the measurements and also the SD calculated from those measurements. Just as for the exams, there’s a 95% chance that the true value is within two SD’s of the average. The scientist would say that the SD represents the uncertainty of the measured average.
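Here is that bookkeeping as a short sketch, with invented gestation-time numbers standing in for real observations. The formula is just the usual sample standard deviation; nothing here is meant to describe how any particular study handles its data.

```python
# Summarizing replicate measurements as "average plus-or-minus uncertainty."
# The gestation times below are invented placeholders, not real data.
import math

measurements = [645, 660, 638, 652, 671, 648, 655, 642, 667, 650]  # days, hypothetical

n = len(measurements)
avg = sum(measurements) / n

# Sample standard deviation: root of the average squared deviation (divided by n-1)
sd = math.sqrt(sum((x - avg) ** 2 for x in measurements) / (n - 1))

print(f"gestation time = {avg:.0f} ± {2 * sd:.0f} days (average ± 2 SD)")
```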
Which is what Heisenberg’s inequality is about. He wrote that the product of two paired uncertainties (like position and momentum) must be larger than a limit set by that teeny “quantum of action,” h. There’s a trade-off: we can refine our measurement of one variable, but we’ll lose precision on the other. If we plot results for one member of the pair against results for the other, there’s no linkage between their average values. However, there will be a rectangle in the middle representing the combined uncertainty.
Heisenberg tells us that the minimum area of that rectangle is a constant.
It’s a very small rectangle, area = h/4π = 0.5×10⁻³⁴ Joule-sec, but it’s significant on the scale of atoms — and maybe on the scale of the Universe (see next week).
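To get a feel for how that number plays out at atomic scale, here’s a bit of illustrative arithmetic. The choice of an electron confined within about 0.1 nanometer (roughly an atom’s width) is my own example, not something specified above.

```python
# How big is "significant on the scale of atoms"? A back-of-the-envelope check.
# The 0.1-nanometer confinement and the electron are illustrative choices.
h = 6.626e-34                       # Planck's constant, Joule-sec
min_area = h / (4 * 3.14159265)     # minimum uncertainty product, h/4π

delta_x = 1e-10                     # position uncertainty: about an atom's width, meters
delta_p = min_area / delta_x        # smallest allowed momentum uncertainty, kg·m/s

m_electron = 9.109e-31              # electron mass, kg
delta_v = delta_p / m_electron      # corresponding velocity uncertainty, m/s

print(f"minimum uncertainty product: {min_area:.2e} Joule-sec")
print(f"momentum uncertainty: at least {delta_p:.2e} kg·m/s")
print(f"velocity uncertainty for an electron: about {delta_v:.2e} m/s")
```

Pin down where the electron is to within one atom’s width and its speed is uncertain by hundreds of kilometers per second; that’s why the tiny rectangle matters at that scale.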
~~ Rich Olcott