Smack-dab in the middle

See that little guy on the bridge, suspended halfway between all the way down and all the way up?  That’s us on the cosmic size scale.

I suspect there’s a lesson there on how to think about electrons and quantum mechanics.

Let’s start at the big end.  The physicists tell us that light travels at 300,000 km/s, and the astronomers tell us that the Universe is about 13.7 billion years old.  Allowing for leap years, the oldest photons must have taken about 4.3×10¹⁷ seconds to reach us, during which time they must have covered 1.3×10²⁶ meters.  Double that to get the diameter of the visible Universe, 2.6×10²⁶ meters.  The Universe probably is even bigger than that, but far as I can see that’s as far as we can see.
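The age-to-size arithmetic is easy to check. Here's a minimal Python sketch using the same round numbers the paragraph quotes (and, like the paragraph, ignoring cosmic expansion):

```python
# Values as quoted in the text
c = 3.0e8                          # speed of light, m/s (300,000 km/s)
seconds_per_year = 365.25 * 24 * 3600
age = 13.7e9 * seconds_per_year    # age of the Universe, in seconds

radius = c * age                   # distance the oldest photons have covered
diameter = 2 * radius              # diameter of the visible Universe

print(f"age {age:.1e} s, radius {radius:.1e} m, diameter {diameter:.1e} m")
# age 4.3e+17 s, radius 1.3e+26 m, diameter 2.6e+26 m
```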

At the small end there’s the Planck length, which takes a little explaining.  Back in 1899, a year before his epochal paper showing that light happens piecewise (we now call the pieces photons), Max Planck combined several “universal constants” to derive a convenient (for him) universal unit of length: 1.6×10⁻³⁵ meters.  It’s certainly an inconvenient number for day-to-day measurements (“Gracious, Junior, how you’ve grown!  You’re now 8×10³⁴ Planck-lengths tall.”).  However, theoretical physicists have saved barrels of ink and hours of keyboarding by using Planck-lengths and other such “natural units” in their work instead of explicitly writing down all the constants.

Furthermore, there are theoretical reasons to believe that the smallest possible events in the Universe occur at the scale of Planck lengths.  For instance, some theories suggest that it’s impossible to measure the distance between two points that are closer than a Planck-length apart.  In a sense, then, the resolution limit of the Universe, the ultimate pixel size, is a Planck length.

So that’s the size range of the Universe, from 1.6×10⁻³⁵ up to 2.6×10²⁶ meters.  What’s a reasonable way to fix a half-way mark between them?

It makes no sense to just add the two numbers together and divide by two the way we’d do for an arithmetic average. That’d be like adding together the dime I owe my grandson and the US national debt — I could owe him 10¢ or $10, but either number just disappears into the trillions.

The best way is to take the geometrical average — multiply the two numbers and take the square root.  I did that.  It’s the X in the sizeline, at 6.5×10⁻⁵ meters, or about the diameter of a fairly large bacterium.  (In the diagram, VSC is the Virgo Supercluster, AG is the Andromeda Galaxy, and the numbers are those exponents of 10.)
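The geometric mean takes one line to verify, using the endpoint values quoted above:

```python
import math

planck_length = 1.6e-35       # meters
universe_diameter = 2.6e26    # meters

# Geometric mean: multiply the two numbers, take the square root
midpoint = math.sqrt(planck_length * universe_diameter)
print(f"{midpoint:.2e} m")    # 6.45e-05 m, about the size of a large microbe
```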

That’s worth marveling at.  Sixty-one orders of magnitude between the size of the Universe and the size of the ultimate pixel.  Yet from blue whales to bacteria, Earth’s life just happens to occupy the half-dozen orders right in the middle of the range.  We think that’s it.

Could this be another case of the geocentric fallacy?  Humans were so certain that Earth was the center of the Universe, before Copernicus and Galileo and Newton proved otherwise.  Is there life out there at scales much larger or much smaller than we imagine?

Who knows? But here’s an intriguing physics/quantum angle I’d like to promote.  We know a lot about structures bigger than us — solar systems and binary stars and galaxy clusters on up.  We know a few sizes and structures a bit smaller — viruses and molecules and atoms.  We’re aware of quarks and gluons that reside inside protons and atomic nuclei, but we don’t know their size or structure.

Even a proton is huge on the Planck-length scale.  At 1.8×10⁻¹⁵ meters the proton measures some 10²⁰ Planck-lengths.  There’s as much scale-space between the Planck-length and the proton as there is between the Earth (1.3×10⁷ meters) and the Universe.
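"Scale-space" here is a statement about logarithms. A quick check with the text's round numbers shows the two spans come out near 20 and 19 decades, close enough for the comparison:

```python
import math

planck, proton = 1.6e-35, 1.8e-15   # meters
earth, universe = 1.3e7, 2.6e26     # meters

print(math.log10(proton / planck))    # ≈ 20.05 decades, Planck length to proton
print(math.log10(universe / earth))   # ≈ 19.3 decades, Earth to Universe
```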

It’s hard to believe that Terra infravita’s realm has no structure whereas Terra supravita is so … busy.  The Standard Model’s “ultimate particles,” the electrons and photons and neutrinos and quarks and gluons, all operate down there somewhere.  It’s reasonable to suppose that they reflect a deeper architecture somewhere on the way down to the Planck-length foam.

Newton wrote (in Latin), “I do not make hypotheses.”  But golly, it’s tempting.

~~ Rich Olcott

There’s a lot of not much in Space

A while ago I drove from Denver to Fort Worth, and I was impressed. See, there’s a lot of not much in eastern Colorado. It’s pretty much the same in western Oklahoma except there’s less not much because there’s less of Oklahoma – but Texas has way more not much than anybody.

That gives Texas not much to brag about, but they do the best they can, bless their hearts.

What got me started on this rant was a pair of astronomical factoids Katherine Kornei wrote in the Nov 2014 Discover magazine.

“If galaxies were shrunk to the size of apples, neighboring galaxies would be only a few meters apart….”
“If the stars within galaxies were shrunk to the size of oranges, they would be separated by 4,800 kilometers (3,000 miles).”

So there’s a lot of not much between galaxies, but a whole lot more not much, relatively speaking, within them. I just measured an apple and an orange in my kitchen. They’re both about the same size, 3 inches in diameter, so I have no idea why she chose different fruits – perhaps she wanted to avoid comparing apples and oranges.

Anyway, if you felt like doing the galaxy visualization you could put two apple galaxies on the floor about 12 feet apart and then line up about 50 apples between them. A fair amount of space for more galaxies.

To see inside a galaxy you could put one orange star in Miami FL, and its on-the-average nearest orange neighbor in Seattle WA. Then you could set out a long skinny row of just about 63 million oranges in between. Oh, and on this scale the nearest galaxy would be about 2 trillion miles (or 43 quadrillion oranges) away. Way more not much inside a galaxy than between two neighboring ones.
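The orange count is straight arithmetic; here's the Miami-to-Seattle row worked out with the quote's 3,000-mile figure:

```python
orange_inches = 3
miami_to_seattle_miles = 3000          # the scaled star separation from the quote
inches_per_mile = 5280 * 12

oranges = miami_to_seattle_miles * inches_per_mile / orange_inches
print(f"{oranges:,.0f}")               # 63,360,000 — "just about 63 million"
```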

So if we squeeze all those apples and oranges together we’d get rid of all the empty space, right?

Not by a long shot. Nearly all those stars are balls of very hot gas, which means they’re made up of atoms crossing empty space inside the star to collide with other atoms. Relative to the size of the atoms, how much empty space is there inside the star?

For example, every chemistry student learns that 6×10²³ molecules of any gas take up a volume of 22.4 liters at standard temperature and pressure. For a single-atom gas like helium that works out to about 22 atom-widths between atoms.

Now think about emptiness inside the Sun. If it’s a typical star (which it is) and if all of its atoms are hydrogen (which they mostly are) and if the average density of the Sun (1408 kg/m³) applied all the way down to the center of the Sun (which it doesn’t), and if we believe NASA’s numbers for the Sun (hey, why not?), then the average spacing works out to about 0.7 atom-widths between neighbors.

So no empty space to squeeze out of the Sun, eh? Well, actually there is quite a lot, because those atoms are mostly empty space, too.

OK, I cheated up there about the Sun, because virtually all of the Sun’s atoms have been dissociated into separated electrons and nuclei. The nucleus is much smaller than its atom – by a factor of 60,000 or so. Think of a grape seed in the middle of a football field.

To sum it upward, we’ve got a set of Russian matryoshka dolls, one inside the next. At the center is a collection of grape seeds, billions and billions of them, each in their own football field. The football fields are all balled into a stellar orange (or maybe an apple), but there are billions of those crammed into a galactic apple (or maybe an orange) that’s about ten feet away from the nearest other piece of fruit.

As Douglas Adams wrote in The Hitchhiker’s Guide to the Galaxy,

“Space … is big. Really big. You just won’t believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space…”

The thing to realize is that the function of all that space is to keep everything from being in the same place. That’s important.

~~ Rich Olcott

Prime years and such

I’ve liked 4s and 6s ever since when, but lately 3s and 7s have been cropping up.  A lot.  And they have a really weird connection with 2016.

For me New Year has always been an opportunity to inspect the upcoming year’s number for interesting properties.

Maybe the easiest way for a number to be interesting is to be prime, that is, not divisible by anything other than itself and one.  My Uncle Harold once proved to me that all odd numbers are prime.

“One’s a prime, and so are three and five.  How about seven?  Seven’s prime.  Nine?  Not a prime but we can throw that one out as experimental error.  Eleven?  Prime.  Thirteen?  Prime.  Case closed.”

They use that logic a lot in politics nowadays.

There are a few prime-ity tests that just need a quick glance.  Take 2015 for example.  Ends with a “5” so it’s got to be divisible by five.  Not a prime.  A number ending with a “0” is divisible by ten, twice five, so it’s not prime either.

Take 2016.  Ends in an even digit so it’s divisible by two.  Not a prime.  Moreover, it fails the “nines test” — add up all the digits (2+0+1+6=9).  If the total is nine or divisible by nine then the number itself is divisible by nine (and by three) so it’s non-prime.  2016 is also divisible by seven but that’s not as easy to diagnose.

That’s about it for quickies.  Beyond those tests you have to slog through dividing the target by every prime number from three up to the target’s square root.  Why stop there?  Because any factor bigger than the square root will have a partner smaller than the square root.
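That slog is easy to automate. Here's a minimal trial-division sketch (it tries every odd number up to the square root rather than just the primes, which wastes a little work but needs no prime list):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: test odd divisors up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print([y for y in range(2014, 2020) if is_prime(y)])   # [2017]
```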

Remember Party Like It’s 1999 (prime)?  Very popular when the Artist Then Known As Prince produced it in 1982 (not a prime).  Unfortunately, we who were working on Y2K projects were too busy to party that year so we couldn’t celebrate 1999 being prime until it was all over.

Y2K itself, 2000, definitely wasn’t prime.  If you know that 1999 is prime you know 2000 can’t be because after you get past 1-2-3, no two adjacent numbers can be prime — one of them would have to be even.  Next-but-one can work, though: both 1997 and 1999 are prime.  Primes separated by two like that are twin primes.

If 2016 won’t be a prime year, is there another way it can be special?  Hmmm…  2016 isn’t a perfect square, nor is it the sum of two squares.  Neither its square nor its cube is particularly noteworthy, but the square PLUS the cube is kinda cute: their sum is 8,197,604,352, which contains every digit just once.

According to The On-Line Encyclopedia of Integer Sequences, 2016 is a hexagonal number.  Start with a dot.  Make that dot one corner of a hexagon of dots.  Then add a hexagon around that, one more dot per side, keeping the original dot as a corner (like the plan for a starter motte-and-bailey castle).  Keep going until the outermost hexagon has 32 dots along each edge.  All the hexagons together will have exactly 2016 dots.
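The dot count has a closed form: the n-th hexagonal number (nested hexagons sharing one corner dot, n dots per edge) is n(2n-1), so the OEIS claim checks out in one line:

```python
def hexagonal(n: int) -> int:
    # n-th hexagonal number: nested hexagons sharing a corner, n dots per edge
    return n * (2 * n - 1)

print([hexagonal(n) for n in (1, 2, 3, 32)])   # [1, 6, 15, 2016]
```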

The OEIS says that 2016 is a participant in at least 925 more special sequences, so I guess it’s a pretty cool number after all.

Those 3s and 7s?  Here they come….

My nominee for Puzzle King of The World is my good friend Jimmy.  I challenged him once to find the connection between

  • the British Army’s WWII section number (2701) for Alan Turing’s super-secret cryptography unit at Bletchley Park, and
  • Jean Valjean’s prisoner number (24601) in Les Misérables 

Turns out it’s all about the primes.  2701 is the product of two primes: 73×37.  24601 is also the product of two primes: 73×337.  Better yet, both of the product expressions are palindromes in their digits (7337, 73337).  To put whipped cream on top, I first noticed the connection during my 73rd year.

So then of course I went looking for other 3…7 and 7…3 primes. There aren’t a lot of them. Going all the way out to 10³⁷ I found:

  • 37 and 73
  • 337 and 733
  • 3,337 (= 47×71, not a prime) and 7,333
  • 333,337 and 733,333

Pretty good symmetry there.
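The hunt can be reproduced in a few lines; the two patterns are k threes followed by a 7, and a 7 followed by k threes:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division up to the square root."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    return all(n % d for d in range(3, math.isqrt(n) + 1, 2))

for k in range(1, 6):
    a = int("3" * k + "7")   # 37, 337, 3337, ...
    b = int("7" + "3" * k)   # 73, 733, 7333, ...
    print(a, is_prime(a), "|", b, is_prime(b))
```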

OK, back to number 2016. I asked Mathematica®, “How many different pairs of primes, like 1999 and 17, sum to 2016?”

What do you suppose the answer was?  Yup, “73.”
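If you'd rather not take Mathematica's word for it, the same count is a short Python loop (each unordered pair {p, q} is counted once, with p ≤ q):

```python
import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    return all(n % d for d in range(3, math.isqrt(n) + 1, 2))

pairs = [(p, 2016 - p) for p in range(2, 2016 // 2 + 1)
         if is_prime(p) and is_prime(2016 - p)]
print(len(pairs))            # the text reports 73
print((17, 1999) in pairs)   # True
```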

Oh, and the next prime year is 2017.  It’ll be great.

~~ Rich Olcott

The direction Newton avoided facing

Reading Newton’s Philosophiæ Naturalis Principia Mathematica is less challenging than listening to Vogon poetry.  You just have to get your head working like a 17th Century genius who had just invented Calculus and who would have deep-fried his right arm in rancid skunk oil before he’d admit to using any of his rival Leibniz’ math notations or techniques.

Newton was essentially a geometer.  These illustrations (from Book 1 of the Principia) will give you an idea of his style.  He’d set himself a problem, then solve it by constructing sometimes elaborate diagrams by which he could prove that certain components were equal or in strict proportion.

For instance, in the first diagram (Proposition II, Theorem II), we see an initial glimpse of his technique of successive approximation.  He defines a sequence of triangles which, as they proliferate, get closer and closer to the curve he wants to characterize.

The lines and trig functions escalate in the second diagram (Prop XII, Problem VII), where he calculates the force on a body traveling along a hyperbola.

The third diagram is particularly relevant to the point I’ll finally get to when I get around to it.  In Prop XLIV, Theorem XIV he demonstrates something weird.  Suppose two objects A and B are orbiting around attractive center C, but B is moving twice as fast as A.  If C exerts an additional force on B that is inversely dependent on the cube of the B-C distance, then A‘s orbit will be a perfect circle (yawn) but B‘s will be an ellipse that rotates around C, even though no external force pushes it laterally.

In modern-day math we’d write the additional force as F ∼ 1/r³, where r is the B-C distance, but Newton verbalized it as “in a triplicate ratio of their common altitudes inversely.”  See what I mean about Vogon poetry?

Now, about that point I was going to get to.  It’s C, in the center of that circle.  If the force is proportional to 1/r³, what happens when r approaches zero?  BLOOIE, the force becomes infinite.

In the previous post we used geometry to understand the optical singularity at the center of the Christmas ball.  I said there that my modeling project showed me a deeper reason for a BLOOIE.  That reason showed up partway through the calculation for the angle between the axis and the ring of reflected light.  A certain ratio came out to be (1-x)/2x, where x is proportional to the distance between the LED and the ball’s center.  Same problem: as the LED approaches the center, x approaches zero and BLOOIE.  (No problem when x is one, because the ratio is 0/2, which is zero, which is OK.)

Singularities happen when the formula for something goes to infinity.

Now, Newton recognized that his central-force (1/rⁿ)-type equations covered gravity and magnetism and even the inward force on the rim of a rotating wheel.  It’s surprising that he didn’t seem too worried about BLOOIE.

I think he had two excuses.  First, he was limited by his graphical methodology.  In most of his constructions, when a certain distance goes to zero there’s a general catastrophe — rectangles and triangles collapse to lines or even points, radii whirl aimlessly without a vertex to aim at…  His lovely derivations devolve into meaninglessness.  Further advances would depend on the  algebraic approach to Calculus taken by the detested Leibniz.

Second (here’s the hook for this post’s title), Newton was looking outward, not inward.  He was considering the orbits of planets and other sizable objects.  r is always the distance between object centers.  For sizable objects you don’t have to worry about r=0 because “center-to-center equals zero” never occurs.  If the Moon (radius 1080 miles) were to drop down to touch the Earth (radius 3960 miles), their centers would still be 5000 miles apart.  No BLOOIE.

Actually, there would be CRUMBLE instead of BLOOIE because a different physical model would apply — but that’s a tale for another post.

The moral of the story is this.  Mathematical models don’t care about infinities, but Nature does.  Any conditions where the math predicts an infinite value (for instance, where a denominator can become zero) are prime territory for new models that make better predictions.

~~ Rich Olcott

Circular Logic

We often read “singularity” and “black hole” in the same pop-science article.  But singularities are a lot more common and closer to us than you might think. That shiny ball hanging on the Christmas tree over there, for instance.  I wondered what it might look like from the inside.  I got a surprise when I built a mathematical model of it.

To get something I could model, I chose a simple case.  (Physicists love to do that.  Einstein said, “You should make things as simple as possible, but no simpler.”)

I imagined that somehow I was inside the ball and that I had suspended a tiny LED somewhere along the axis opposite me.  Here’s a sketch of a vertical slice through the ball; let’s begin on the left half of the diagram.

I’m up there near the top, taking a picture with my phone.

To start with, we’ll put the LED (that yellow disk) at position A on the line running from top to bottom through the ball.  The blue lines trace the light path from the LED to me within this slice.

The inside of the ball is a mirror.  Whether flat or curved, the rule for every mirror is “The angle of reflection equals the angle of incidence.”  That’s how fun-house mirrors work.  You can see that the two solid blue lines form equal angles with the line tangent to the ball.  There’s no other point on this half-circle where the A-to-me route meets that equal-angle condition.  That’s why the blue line is the only path the light can take.  I’d see only one point of yellow light in that slice.

But the ball has a circular cross-section, like the Earth.  There’s a slice and a blue path for every longitude, all 360° of them and lots more in between.  Every slice shows me one point of yellow light, all at the same height.  The points all join together as a complete ring of light partway down the ball.  I’ve labeled it the “A-ring.”

Now imagine the LED moving upward to position B.  The equal-angles rule still holds, which puts the image of B in the mirror further down in the ball.  That’s shown by the red-lined light path and the labeled B-ring.

So far, so good — as the LED moves upward, I see a ring of decreasing size.  The surprise comes when the LED reaches C, the center of the ball.  On the basis of past behavior, I’d expect just a point of light at the very bottom of the ball (where it’d be on the other side of the LED and therefore hidden from me).

Nup, doesn’t happen.  Here’s the simulation.  The small yellow disk is the LED, the ring is the LED’s reflected image, the inset green circle shows the position of the LED (yellow) and the camera (black), and that’s me in the background, taking the picture…

The entire surface suddenly fills with light — BLOOIE! — when the LED is exactly at the ball’s center.  Why does that happen?  Scroll back up and look at the right-hand half of the diagram.  When the LED is exactly at C, every outgoing ray of light in any direction bounces directly back where it came from.  And keeps on going, and going and going.  That weird display can only happen exactly at the center, the ball’s optical singularity, that special point where behavior is drastically different from what you’d expect as you approach it.

So that’s using geometry to identify a singularity.  When I built the model* that generated the video I had to do some fun algebra and trig.  In the process I encountered a deeper and more general way to identify singularities.

<Hint> Which direction did Newton avoid facing?

* – By the way, here’s a shout-out to Mathematica®, the Wolfram Research company’s software package that I used to build the model and create the video.  The product is huge and loaded with mysterious special-purpose tools, pretty much like one of those monster pocket knives you can’t really fit into a pocket.  But like that contraption, this software lets you do amazing things once you figure out how.

~~ Rich Olcott

And now for some completely different dimensions

Terry Pratchett wrote that Knowledge = Power = Energy = Matter = Mass.  Physicists don’t agree because the units don’t match up.

Physicists check equations with a powerful technique called “Dimensional Analysis,” but it’s only theoretically related to the “travel in space and time” kinds of dimension we discussed earlier.

It all started with Newton’s mechanics, his study of how objects affect the motion of other objects.  His vocabulary list included words like force, momentum, velocity, acceleration, mass, …, all concepts that seem familiar to us but which Newton either originated or fundamentally re-defined.  As time went on, other thinkers added more terms like power, energy and action.

They’re all linked mathematically by various equations, but also by three fundamental dimensions: length (L), time (T) and mass (M). (There are a few others, like electric charge and temperature, that apply to problems outside of mechanics proper.)

Velocity, for example.  (Strictly speaking, velocity is speed in a particular direction but here we’re just concerned with its magnitude.)   You can measure it in miles per hour or millimeters per second or parsecs per millennium — in each case it’s length per time.  Velocity’s dimension expression is L/T no matter what units you use.

Momentum is the product of mass and velocity.  A 6,000-lb Escalade SUV doing 60 miles an hour has twice the momentum of a 3,000-lb compact car traveling at the same speed.  (Insurance companies are well aware of that fact and charge accordingly.)  In terms of dimensions, momentum is M·(L/T) = ML/T.

Acceleration is how rapidly velocity changes — a car clocked at “zero to 60 in 6 seconds” accelerated an average of 10 miles per hour per second.  Time’s in the denominator twice (who cares what the units are?), so the dimensional expression for acceleration is L/T².

Physicists and chemists and engineers pay attention to these dimensional expressions because they have to match up across an equal sign.  Everyone knows Einstein’s equation, E = mc².  The c is the velocity of light.  As a velocity its dimension expression is L/T.  Therefore, the expression for energy must be M·(L/T)² = ML²/T².  See how easy?

Now things get more interesting.  Newton’s original Second Law calculated force on an object by how rapidly its momentum changed: (ML/T)/T.  Later on (possibly influenced by his feud with Leibniz about who invented calculus), he changed that to mass times acceleration, M·(L/T²).  Conceptually they’re different but dimensionally they’re identical — both expressions for force work out to ML/T².
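This bookkeeping mechanizes nicely. Here's a sketch that represents a dimension as an (L, T, M) exponent tuple, so multiplying quantities adds exponents and dividing subtracts them:

```python
def dmul(a, b):
    """Multiply two quantities: add their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def ddiv(a, b):
    """Divide two quantities: subtract their dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

# Exponents in the order (L, T, M)
L, T, M = (1, 0, 0), (0, 1, 0), (0, 0, 1)

velocity = ddiv(L, T)                       # L/T
momentum = dmul(M, velocity)                # ML/T
acceleration = ddiv(velocity, T)            # L/T^2
energy = dmul(M, dmul(velocity, velocity))  # ML^2/T^2, as E = mc^2 requires

# Newton's two forms of the Second Law agree dimensionally:
print(ddiv(momentum, T) == dmul(M, acceleration))   # True
```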

Something seductively similar seems to apply to Heisenberg’s Area.  As we’ve seen, it’s the product of uncertainties in position (L) and momentum (ML/T), so the Area’s dimension expression works out to L·(ML/T) = ML²/T.

There is another way to get the same dimension expression, but things aren’t as nice there as they look at first glance.  Action is given by the amount of energy expended in a given time interval, times the length of that interval.  If you take the product of energy and time the dimensions work out as (ML²/T²)·T = ML²/T, just like Heisenberg’s Area.

It’s so tempting to think that energy and time negotiate precision like position and momentum do.  But they don’t.  In quantum mechanics, time is a driver, not a result.  If you tell me when an event happens (the t-coordinate), I can maybe calculate its energy and such.  But if you tell me the energy, I can’t give you a time when it’ll happen.  The situation reminds me of geologists trying to predict an earthquake.  They’ve got lots of statistics on tremor size distribution and can even give you average time between tremors of a certain size, but when will the next one hit?  Lord only knows.

File the detailed reasoning under “Arcane” — in technicalese, there are operators for position, momentum and energy but there’s no operator for time.  If you’re curious, John Baez’s paper has all the details.  Be warned, it contains equations!

Trust me — if you’ve spent a couple of days going through a long derivation, totting up the dimensions on either side of equations along the way is a great technique for reassuring yourself that you probably didn’t do something stupid back at hour 14.  Or maybe to detect that you did.

~~ Rich Olcott

Heisenberg’s Area

Unlike politicians, scientists want to know what they’re talking about when they use a technical word like  “Uncertainty.”  When Heisenberg laid out his Uncertainty Principle, he wasn’t talking about doubt.  He was talking about how closely experimental results can cluster together, and he was putting that in numbers.

Think of Robin Hood competing for the Golden Arrow.  For the showmanship of the thing, Robin wasn’t just trying to hit the target, he wanted his arrow to split the Sheriff’s.  If the Sheriff’s shot was in the second ring (moderate accuracy, from the target’s point of view), then Robin’s had to hit exactly the same off-center location (still moderate accuracy but great precision).  The Heisenberg Uncertainty Principle (HUP) is all about precision (a.k.a., range of variation).

We’ve all encountered exams that were graded “on the curve.”  But what curve is that?  I can say from personal experience that it’s extraordinarily difficult to create an exam where  the average grade is 75.  I want to give everyone the chance to show what they’ve learned.  Each student probably learned only part of what’s in the unit, but I won’t know which part until after the exam is graded.  The only way to be fair is to ask about everything in the unit.  Students complained that my tests were really hard because to get 100 they had to know it all.

Translating test scores to grades for a small class was straightforward.  I would plot how many papers got between 95 and 100, how many got 90-95, etc., and look at the graph.  Nearly always it looked like the top example.  There are a few people who clearly have the material down pat; they clearly earned an “A.”  Then there’s a second group who didn’t do as well as the A’s but did significantly better than the rest of the class — they earned a “B.”  At the other end there’s a (hopefully small) group of students who are floundering.  Long-term I tried to give them extra help but short-term I had no choice but to give them an “F.”

With a large class those distinctions get blurred and all I saw (usually) was a single broad range of scores, the well-known “bell-shaped curve.”  If the test was easy the bell was centered around a high score.  If the test was hard that center was much lower.  What’s interesting, though, is that the width of that bell for a given class stayed pretty much the same.  The curve’s width is described by a number called the standard deviation (SD), proportional to the width at half-height.  If a student asked, “What’s my score?” I could look at the curve for that exam and say there’s a 68% chance that the score was within one SD of the average, and a 95% chance that it was within two SD’s.
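Those one-SD and two-SD percentages come straight from the normal distribution's error function; a two-line check (the exact figures are 68.3% and 95.4%):

```python
import math

def within(k: float) -> float:
    """Fraction of a normal distribution within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

print(f"{within(1):.1%}, {within(2):.1%}")   # 68.3%, 95.4%
```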

The same bell-shape also shows up in research situations where a scientist wants to measure some real-world number, be it an asteroid’s weight or elephant gestation time.  He can’t know the true value, so instead he makes many replicate measurements or pays close attention to many pregnant elephants.  He summarizes his results by reporting the average of all the measurements and also the SD calculated from those measurements.  Just as for the exams, there’s a 95% chance that the true value is within two SD’s of the average.  The scientist would say that the SD represents the uncertainty of the measured average.

Which is what Heisenberg’s inequality is about.  He wrote that the product of two paired uncertainties (like position and momentum) must be larger than that teeny “quantum of action,” h.  There’s a trade-off.  We can refine our measurement of one variable but we’ll lose precision on the other.  If we plot results for one member of the pair against results for the other, there’s no linkage between their average values.  However, there will be a rectangle in the middle representing the combined uncertainty.

Heisenberg tells us that the minimum area of that rectangle is a constant.

It’s a very small rectangle, area = h/4π = 0.5×10⁻³⁴ joule-sec, but it’s significant on the scale of atoms — and maybe on the scale of the Universe (see next week).

~~ Rich Olcott

Heisenberg’s trade-offs

A kite floating on the breeze.  Optimal work-life balance.  Smoothly functioning free markets.  The Heisenberg Uncertainty Principle.  Why would an alien from another planet recognize the last one but maybe not the others?

The kite is a physical object, intentionally built by humans to human scale.  The next two are idealized theoretical constructs, goals to be approached but rarely achieved.  The Heisenberg Uncertainty Principle (HUP) is fundamental to how the Universe works.

The first three are each in a dynamic equilibrium that is constantly buffeted by competing forces.  The HUP comes straight out of the deep math for where those forces come from.  Kites and work stress and markets may be peculiar to Earth, but the HUP is in play on every planet and star.

In the last post we saw that thanks to the HUP we can precisely identify an oboe’s pitch if it plays forever.  We can know precisely when a pitchless cymbal crashed.  But it’s mathematically impossible to get both exact pitch and exact time for the same sound.  Thank goodness, we can have imprecise knowledge of both quantities and actually play some music.

We determine a pitch (cycles per second) by counting sound waves passing during a given duration — and that limits our knowledge.  We can’t know that a wave has passed unless we see at least two peaks.  Our observation period must be at least long enough to see two peaks.  To put it the other way, the pitch must be high enough to give us at least two peaks during the time we’re watching.  This isn’t quantum mechanics, it’s just arithmetic, but it’s basic to physics.

Mathematically the HUP is as simple as Einstein’s E=mc² equation, except the HUP is an inequality:

[A-uncertainty] × [B-uncertainty] ≥ h / 4π

where A and B are two paired quantities like pitch and duration.

(That h is Planck’s constant, “the quantum of action,” 6.6×10⁻³⁴ joule-sec.  That’s a very small number indeed but it shows up everywhere in quantum physics.  To put h in scale, one gram of TNT packs 4184 joules of explosive energy.  TNT has a detonation velocity of 6900 meters/sec and density of 1.60 gram/cm³, so we can figure a 1-gram cube of the stuff would burn for 1.2 microseconds and generate a total action of about 5×10⁻³ joule-sec.  Divide that by Avogadro’s number to get that one molecule of TNT is good for 10⁻²⁶ joule-sec.  That’s about 10 million times h.  So, yeah, h is small.)
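That back-of-envelope runs like this in Python. Note the last step mirrors the text's shortcut of dividing one gram's action by Avogadro's number, which treats a gram as a mole's worth of molecules; take the per-molecule figure as a loose order-of-magnitude estimate:

```python
h = 6.6e-34            # Planck's constant, J*s
energy = 4184.0        # J released by 1 gram of TNT
density = 1.60         # g/cm^3
det_velocity = 6900.0  # detonation velocity, m/s
avogadro = 6.022e23

side = (1.0 / density) ** (1 / 3) / 100   # edge of a 1-gram cube, in meters
burn_time = side / det_velocity           # ≈ 1.2e-6 s
action = energy * burn_time               # ≈ 5.2e-3 J*s
per_molecule = action / avogadro          # the text's rough per-molecule action

print(f"{burn_time:.1e} s, {action:.1e} J*s, {per_molecule / h:.1e} times h")
```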

Back to the HUP inequality.  A and B are our paired quantities.  The standard examples that everyone’s heard of are position and momentum, as in the old physicist joke, “I haven’t a clue where I’m going, but I know how fast I’m getting there.”  For things that are tied to a central attractor like an atomic nucleus, A and B would be angular position and angular momentum.  If you’re into solid-state physics you may have run into another example — the number of electrons in a superconducting current is paired with a metric that reflects the degree of order in the conducting medium.  One more pair is energy and time, but that’s a story for another week.

But what’s in the HUP inequality isn’t A and B, but rather our uncertainty about each.  A billiard ball might be on the lip of the near cup or all the way across the table — HUP won’t care.  What’s important to HUP is whether the ball is here plus/minus one inch, or here plus/minus a millionth of an inch.  Similarly, HUP doesn’t care how fast the ball is going, but it does care whether the speed is plus/minus one inch per second or plus/minus one millionth of an inch per second.  HUP tells us that we can know one of the pair precisely and the other not at all, or that we can know both imprecisely.  Furthermore, even the imprecision has a limit.

We can’t pin down both A and B at once any more tightly than that little teeny h allows, but some physicists believe h may have been big enough to launch our Universe.

Next week — HUP, two, three, four

~~ Rich Olcott

Don’t blame Heisenberg

There was the time I discovered that a chemical compound I’d made was being destroyed by the light of the spectrometer I was using to study it.  The NYT just ran an article about how biologists have a new-tech problem studying animals in the field because a camera drone can scare the critters away (or provoke an attack).  A teacher can’t shut down an ongoing bullying campaign because student chatter stops when they see him coming.  What’s the common thread in these situations?

You probably thought “Heisenberg,” but please don’t dis the poor guy for them.  You may have seen the for-real Heisenberg Uncertainty Principle in action, but only if you’re a physicist or a music-reading percussionist.  Rather, the incidents in the first paragraph are all examples of the Observer Effect, which is completely separate from the work of Werner H.

The confusion arises because the Observer Effect is often used in classroom explanations of the Heisenberg Uncertainty Principle (the HUP).  The Observer Effect could well apply pretty much anywhere there’s an observer and an observee (see photo), which is why research psychologists and police interrogators use one-way mirrors.

By contrast, the HUP is in play in only a few circumstances, chiefly audio and physics labs.  The key is that word uncertainty, because the HUP is all about the limits of our knowledge.  It says that there are certain pairs of quantities where we must trade off knowledge of one against knowledge of the other.  The more precisely we know the value of one, the more uncertain we are about the other one’s value.

Let’s start with sound.  Did you know that sheet music for a drummer doesn’t really use a “proper” staff with keys and all?  Oh, sure, they use a staff, sort of, but the “notes” indicate strokes rather than tones.  Here’s one variant of many notations out there.

Suppose an oboist plays a tone for you, that nice, long “A” that the orchestra tunes to.  (It’s generally the oboe playing that note, by the way, for two reasons.  First, the oboe uses very little air to produce its sound, so the oboist can hold that note much longer than a flautist or trumpeter could.  More important, though, is that the oboe simply isn’t adjustable — everyone else perforce has to re-tune to match up.)  The primary component of that “A” sound should be a wave of 440 cycles per second.

Now suppose the oboist plays that “A” in shorter and shorter bursts — half-note, quarter-note, etc., down to where all that comes out is a blip.  His fingering and embouchure don’t change, so he’s still playing an “A.” However, when the emitted sound wave is very short we can no longer identify the pitch because there aren’t enough cycles there.  We need at least 2 cycles in a known time period to be able to say how many cycles per second the tone has.

Now the oboist switches up an octave (880 cycles per second) with the same burst length.  That gives us twice as many cycles in the blip and we can identify the new pitch.  However, if he cuts the note’s length in half once more, then again we don’t have enough cycles to count.  The shorter the note, the more precisely we know when it sounded, but the less precisely we know what note it was.
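We can watch this pitch blurring numerically. The sketch below is my own illustration using NumPy’s FFT, with an arbitrary sample rate and burst lengths: the frequency bins of a burst of length T are 1/T apart, so a 10 ms blip can’t pin the pitch down closer than 100 Hz.

```python
import numpy as np

def measured_pitch(freq_hz, burst_sec, rate=44100):
    """Dominant FFT frequency of a pure sine burst."""
    t = np.arange(int(rate * burst_sec)) / rate
    spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * freq_hz * t)))
    return np.fft.rfftfreq(len(t), 1.0 / rate)[np.argmax(spectrum)]

for burst in (1.0, 0.1, 0.01):
    pitch = measured_pitch(880.0, burst)
    print(f"{burst * 1000:6.1f} ms burst of 880 Hz reads as {pitch:6.1f} Hz "
          f"(bins {1 / burst:.0f} Hz apart)")
```

With a full second of tone the pitch comes back dead-on; at 10 ms the nearest available answers are a whole semitone-and-more apart.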

A cymbal crash is basically the limiting case.  It has no distinct pitch (or the physicist would say it has a huge number of pitches that all die away after a few cycles).  Rather than tell the percussionist to play an unidentifiably short note, the composer says, “T’heck with it!” and writes an “X” somewhere on the staff.

And vice-versa — at the start of the oboist’s note the sound contains a mixture of other frequencies.  The interlopers eventually die out as the note proceeds.  There’s another mixing when the oboist runs out of breath.  We can only have a really pure tone if the note never starts and never ends — the poor oboist plays that one note forever.

Thanks to Heisenberg, we can be confident that even Bach’s Well-Tempered Clavier was imprecise.

Next week — more fun with Heisenberg.

~~ Rich Olcott

Buttered Cats — The QM perspective

You may have heard recently about the “buttered cat paradox,” a proposition that starts from two time-honored claims:

  • Cats always land on their feet.
  • Buttered toast always lands buttered side down.

“The paradox arises when one considers what would happen if one attached a piece of buttered toast (butter side up) to the back of a cat, then dropped the cat from a large height. …
“[There are those who suggest] that the experiment will produce an anti-gravity effect. They propose that as the cat falls towards the ground, it will slow down and start to rotate, eventually reaching a steady state of hovering a short distance from the ground while rotating at high speed as both the buttered side of the toast and the cat’s feet attempt to land on the ground.”

~~ en.wikipedia.org/wiki/Buttered_cat_paradox

After extensive research (I poked around with Google a little), I’ve concluded that no-one has addressed the situation properly from the quantum mechanical perspective. The cat+toast system in flight clearly meets the Schrödinger conditions — we cannot make an a priori prediction one way or the other so we must consider the system to be in a 50:50 mix of both positions (cat-up and cat-down).

In a physical experiment with a live cat it’s probable that cat+toast actually would be rotating. As is the case with unpolarized light, we must consider the system’s state to be a 50:50 mixture of clockwise and counter-clockwise rotation about its roll axis (defined as one running from the cat’s nose to the base of its tail). Poor kitty would be spinning in two opposing directions at the same time.
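The 50:50 mixture is easy to write down explicitly. In this sketch (the |CW⟩ and |CCW⟩ labels are my own) the Born rule hands each rotation sense an even chance:

```python
import numpy as np

cw = np.array([1.0, 0.0])          # |CW>:  clockwise roll
ccw = np.array([0.0, 1.0])         # |CCW>: counter-clockwise roll

state = (cw + ccw) / np.sqrt(2)    # equal superposition of the two senses
probs = np.abs(state) ** 2         # Born rule: probability = |amplitude|^2

print(probs)                       # equal 0.5 chances, up to rounding
```

Poor kitty: a perfectly respectable quantum state, spinning both ways at once until somebody looks.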

Online discussions of the problem have alluded to some of the above considerations. Some writers have even suggested that the combined action of the two opposing adages could generate infinite rotational acceleration and even anti-gravity effects. Those are clearly incorrect conclusions – the concurrent counter-rotations would automatically cancel out any externally observable effects. As to the anti-gravity proposal, not even Bustopher Jones is heavy enough to bend space like a black hole. Anyway, he has white spats.

However, the community appears to have completely missed the Heisenbergian implications of the configuration.

The Heisenberg Uncertainty Principle declares that it’s impossible to obtain simultaneous accurate values for two paired variables such as a particle’s position and momentum. The better the measurement of one variable, the less certain you can be of the other, and vice-versa. There’s an old joke about a cop who pulled a physicist to the side of the road and angrily asked her, “Do you have any idea how fast you were going?”  “I’m afraid not, officer, but I know exactly where I am.”

It’s less commonly known that energy and time are another such pair of variables – the stronger the explosion, the harder it is to determine precisely when it started.
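The energy-time pair obeys the same h/4π floor as the other pairs. A quick numeric sketch (the energy spreads below are my own made-up examples) shows why the tradeoff never bothers us at everyday scales:

```python
import math

h = 6.6e-34  # Planck's constant, joule-sec

def min_time_uncertainty(delta_e_joules):
    """Tightest timing uncertainty once the energy spread is fixed: h/(4*pi*dE)."""
    return h / (4 * math.pi * delta_e_joules)

# A 1-joule energy spread pins timing to ~5e-35 s: hopelessly unmeasurable.
print(min_time_uncertainty(1.0))
# Even an atomic-scale spread of 1.6e-19 J (one electron-volt) allows ~3e-16 s.
print(min_time_uncertainty(1.6e-19))
```

Any explosion big enough to bother a cat has an energy spread so large that the HUP’s timing limit is far beyond what any clock could detect anyway.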

Suppose now that our cat+toast system is falling slowly, perhaps in a low-gravity environment. The landing, when it finally occurs, will be gentle and extend over an arbitrarily long period of time. Accordingly, the cat will remain calm and may not even wake from its usual slumberous state.

By contrast, suppose that cat+toast falls rapidly. The resulting impact will be squeezed into a very short interval. As we would expect from Heisenberg’s formulation, the cat will become really, really angry and with strong probability will attack the researcher in a highly energetic manner.

From a theoretical standpoint therefore, we caution experimentalists to take proper precautions in preparing a laboratory system to test the paradox.

Next week – Getting more certain about Heisenberg

~~ Rich Olcott