Getting over The Hill

“You guys want refills? You look like you’re gonna be here a while.”

“Yes, thanks, Al. Your lattes are sooo good. And can we have some more paper napkins?”

“Sure, but don’t let ’em blow away or nothin’, OK? I hate havin’ to pick up the place.”

“They’ll stay put. Just a half-cup of mud for me, thanks.”

The Spring breeze has picked up a little so we hitch our chairs closer together. Susan reaches for a paper napkin, draws a curve. “Here’s another pattern you haven’t featured yet, Sy. It’s in every chemist’s mind when they think about reactions.”

“OK, I suppose this is molecules A and B on one side of some sort of wall and molecule C on the other.”

“It’ll be clearer if I label the axes. It’s a reaction between A and B to make C. The horizontal axis isn’t a distance, it’s a measure of the reaction’s progress toward completion. Beginning molecules to the left, completed reaction to the right, transition in the middle, see? The vertical axis is energy. We say the reaction is energetically favored because C is lower than A and B separately.”

“Then what’s the wall?”

“We call it the barrier. It’s some additional dollop of energy that allows the reaction to happen. Maybe A or B has to be reconfigured before they can form an A~B transition state. That’s common in carbon chemistry, for instance. Carbon usually has four bonds, but you can get five‑bonded transition states. They usually don’t last very long, though.”

“Right, carbon and its neighbors prefer the tetrahedral shape. Five‑bonded carbon distorts the stable electron clouds. Heat energy shoves things into position, I suppose.”

“Often but hardly always. Especially for large molecules, heat’s more likely to jostle things out of position than put them together. That’s what cooking does.”

“The curve reminds me of particle accelerator physics, except it takes way more energy to overcome nuclear forces when you mash sub‑atomic thingies together.”

“Oh, yes, very similar in terms of that general picture — except that the C side could be multiple emitted particles.”

“So your sketch covers processes everywhere, not just Chemistry. They all have different barrier profiles, then?”

“Of course. My drawing was just to give you the idea. Some barriers are high, some are low, either side may rise or fall exponentially or by some power of the distance, some are lumpy, it all depends. Some are even flat.”

“Flat, like no resistance at all?”

“Oh, yes. Hypergolic rocket fuel pairs ignite spontaneously when they mix. Water and alkali metals make flames — have you seen that video of metallic sodium dumped into a lake and exploding like mad? Awesome!”

“I can imagine, or maybe not. If heat energy doesn’t get molecules over that barrier, what does?”

“Catalysts, mostly. Some do their thing by capturing the reactants in adjacent sites, maybe doing a little geometry jiggling while they’re at it. Some play games with the electron states of one or both reactants. Anyhow, what they accomplish is speeding up a reaction by replacing the original barrier with one or more lower ones.”

“Wait, reaction speed depends on the barrier height? I’d expect either go or no‑go.”

“No, it’s usually more complicated than that. Umm … visualize tossing a Slinky toy into the air. Your toss gives it energy. Part of the energy goes into lifting it against Earth’s gravity, part into spinning motion and part into crazy wiggles and jangling, right? But if you toss just right, maybe half of the energy goes into just stretching it out. Now suppose there’s a weak spot somewhere along the spring. Most of your tosses won’t mess with the spot, but a pure stretch toss might have enough energy to break it apart.”
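
A numerical aside, outside the conversation: the textbook Arrhenius expression, rate ∝ exp(−Ea/RT), captures Susan’s point that reaction speed falls off exponentially with barrier height. The Python sketch below uses made‑up barrier heights, chosen only to show the size of the effect.

```python
# Arrhenius picture: relative rate ~ exp(-Ea / (R*T)).  Illustrative numbers only.
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # room temperature, K

def rate_factor(barrier_kj_per_mol):
    """Boltzmann factor for getting over a barrier; the prefactor cancels in a ratio."""
    return math.exp(-barrier_kj_per_mol * 1000.0 / (R * T))

uncatalyzed = 80.0   # hypothetical barrier, kJ/mol
catalyzed = 50.0     # hypothetical lower barrier a catalyst might offer, kJ/mol

speedup = rate_factor(catalyzed) / rate_factor(uncatalyzed)
print(f"Dropping the barrier from {uncatalyzed:.0f} to {catalyzed:.0f} kJ/mol "
      f"speeds the reaction up by a factor of about {speedup:,.0f}")
```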

“Gotcha, the transition barrier might be a probability thing depending on how the energy’s distributed within A and B. Betcha tunneling can play a part, too.”

“Mm? Oh, of course, you’re a Physics guy so you know quantum. Yes, some reactions depend upon electrons or hydrogen atoms tunneling through a barrier, but hardly ever anything larger than that. Whoops, I’m due back at the lab. See ya.”

<inaudible> “Oh, I hope so.”

~~ Rich Olcott

Hysteria

<chirp, chirp> “Moire here.”

“Hi, Sy, it’s Vinnie again. Hey, I just heard something on NPR I wanted to check with you on.”

“What’s that?”

“They said that even with the vaccine and all, it’s gonna take years for us to get back to normal ’cause the economy’s hysterical. Does that mean it’s cryin’‑funny or just cryin’? Neither one seems to fit.”

“You’re right about the no‑fit. Hmm… Ah! Could the word have been ‘hysteresis‘?”

“Somethin’ like that. What’s it about?”

“It’s an old Physics word that’s been picked up by other fields. Not misused as badly as ‘quantum,’ thank goodness, but still. The word itself gives you a clue. Do you hear the ‘history‘ in there?”

“Hysteresis, history … cute. So it’s about history?”

“Yup. The classic case is magnetism. Take an iron nail, for instance. The nail might already be magnetized strongly enough to pick up a paper clip. If it can, you can erase the magnetism by heating the nail white‑hot. If the nail’s not magnetic you may be able to magnetize it by giving it a few hammer‑whacks while it’s pointed north‑south, parallel to Earth’s magnetic field. Things get more interesting if we get quantitative. A strong‑enough magnetic field will induce magnetism in that nail no matter what direction it’s pointed. Reverse that field’s direction and the nail stays magnetized, only less so. It takes a stronger reverse field to demagnetize the nail than it took to magnetize it in the first place. See how the history makes a difference?”
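
An aside for the quantitatively curious: a toy model makes the history dependence concrete. Give each magnetic domain its own switching threshold, sweep the applied field up and down, and the magnetization you read off depends on where the field has been. This is my own illustrative sketch, not a model of real iron.

```python
# Toy "domains with thresholds" model of a hysteresis loop.
# Each domain flips to follow the applied field only when the field's strength
# exceeds that domain's own switching threshold, so the magnetization remembers
# where the field has been.  Illustrative numbers, not fitted to real iron.
import random

random.seed(1)
thresholds = [random.uniform(0.2, 1.0) for _ in range(1000)]   # per-domain switching fields
state = [random.choice((-1.0, 1.0)) for _ in thresholds]       # random starting directions

def apply_field(H):
    """Flip every domain whose threshold the field exceeds; return net magnetization."""
    for i, t in enumerate(thresholds):
        if abs(H) >= t:
            state[i] = 1.0 if H > 0 else -1.0
    return sum(state) / len(state)

for H in (0.0, 0.5, 1.2, 0.0, -0.5, -1.2, 0.0):
    print(f"applied field {H:+.1f}  ->  magnetization {apply_field(H):+.2f}")
# The strong +1.2 pulse magnetizes the sample; at field 0.0 it stays magnetized,
# and the weak reverse field -0.5 only partly undoes it.
```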

“Yeah, for some things.”

“And that’s the point. Some of a system’s properties are as fixed as the nail’s weight or chemical composition. However, it may have other properties we can’t understand without knowing the history. Usually we can’t even predict them without looking at deeper structures. Hysteresis highlights two more gaps in Newton’s Physics. As usual he’s got a good excuse because many history‑dependent phenomena couldn’t even be detected with 17th‑Century technology. We couldn’t produce controllable magnetic fields until the 19th Century, when Oersted and Ampere studied magnetism and electricity. We didn’t understand magnetic hysteresis until the 20th Century.”

“Haw! You’re talking history of history. Anyway, to me it looks like what’s going on is that the strong field gets the magnetic atoms in there to all point the same way and heat undoes that by shaking them up to point random‑like.”

“What about the reversing field?”

“Maybe it points some of the atoms in the other direction and that makes the nail less and less magnetic until the field is strong enough to point everything backwards.”

“Close enough. The real story is that the atoms, iron in this case, are organized in groups called domains. The direction‑switching happens at the domain level — battalions of magnetically aligned atoms — but we had no way to know that until 20th‑Century microscopy came along.”

“So it takes ’em a while to get rearranged, huh?”

“Mmm, that’d be rate-dependent hysteresis, where the difference between forward and backward virtually disappears if you go slow enough. Think about putting your hand slowly into a tub of water versus splashing in there. Slow in, slow out reverses pretty well, but if you splash the water’s in turmoil for quite a long time. Magnetic hysteresis, though, doesn’t care about speed except in the extreme case. It’s purely controlled by the strength of the applied field.”

“I’m thinking about that poor frog.”

“You would go there, wouldn’t you? Yeah, the legendary frog in slowly heating water would be another history dependency but it’s a different kind. The nail’s magnetism only depends on atoms standing in alignment. A frog is a highly organized system, lots of subsystems that all have to work together. Warming water adds energy that will speed up some subsystems more than others. If Froggy exits the pot before things desynchronize too far then it can recover its original lively state. If it’s trapped in there you’ve got frog soup. By the way, it’s a myth that the frog won’t try to hop out if you warm the water slowly. Frogs move to someplace cool if they get hotter than their personal threshold temperature.”

“Frogs are smarter than legends, huh?”

~~ Rich Olcott

‘Twixt A Rock And A Vortex

A chilly late December walk in the park and there’s Vinnie on a lakeside bench, staring at the geese and looking morose. “Hi, Vinnie, why so down on such a bright day?”

“Hi, Sy. I guess you ain’t heard. Frankie’s got the ‘rona.”

“Frankie??!? The guy’s got the constitution of an ox. I don’t think he’s ever been sick in his life.”

“Probably not. Remember when that bug going around last January had everyone coughing for a week? Passed him right by. This time’s different. Three days after he showed a fever, bang, he’s in the hospital.”

“Wow. How’s Emma?”

“She had it first — a week of headaches and coughing. She’s OK now but worried sick. Hospital won’t let her in to see him, of course, which is a good thing I suppose so she can stay home with the kids and their schoolwork.”

“Bummer. We knew it was coming but…”

“Yeah. Makes a difference when it’s someone you know. Hey, do me a favor — throw some science at me, get my mind off this for a while.”

“That’s a big assignment, considering. Let’s see … patient, pandemic … Ah! E pluribus unum and back again.”

“Come again?”

“One of the gaps that stand between Physics and being an exact science.”

“I thought Physics was exact.”

“Good to fifteen decimal places in a few special experiments, but hardly exact. There’s many a slip ‘twixt theory and practice. One of the slips is the gap between kinematic physics, about how separate objects interact, and continuum physics, where you’re looking at one big thing.”

“This is sounding like that Loschmidt guy again.”

“It’s related but bigger. Newton worked on both sides of this one. On the kinematics side there’s billiard balls and planets and such. Assuming no frictional energy loss, Newton’s Three Laws and his Law of Gravity let us calculate exact predictions for their future trajectories … unless you’ve got more than two objects in play. It’s mathematically impossible to write exact predictions for three or more objects unless they start in one of a few special configurations. Newton didn’t do atoms, no surprise, but his work led to Schrödinger’s equation for an exact description of single electron, single nucleus systems. Anything more complicated, all we can do is approximate.”

“Computers. They do a lot with computers.”

“True, but that’s still approximating. Time‑step by time‑step and you never know what might sneak in or out between steps.”
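
A sketch of what “time‑step by time‑step” means in practice, with toy values rather than any real planet: push a body around a circular orbit with plain Euler steps and watch the total energy drift, which is exactly the kind of error that sneaks in between steps.

```python
# "Time-step by time-step": step a toy planet around with plain Euler updates
# and watch the conserved total energy drift, the hallmark of approximation
# error creeping in between steps.  Toy units throughout.
GM = 1.0                 # gravitational parameter
x, y = 1.0, 0.0          # start on a circular orbit of radius 1
vx, vy = 0.0, 1.0        # circular-orbit speed sqrt(GM/r) = 1
dt = 0.01                # time step

def total_energy():
    r = (x * x + y * y) ** 0.5
    return 0.5 * (vx * vx + vy * vy) - GM / r

e0 = total_energy()
for step in range(1, 10001):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3    # inverse-square acceleration
    vx, vy = vx + ax * dt, vy + ay * dt    # naive Euler update
    x, y = x + vx * dt, y + vy * dt
    if step % 2500 == 0:
        print(f"step {step:5d}: energy drift {total_energy() - e0:+.4f}")
# Fancier integrators drift far less, but no finite-step scheme reproduces
# the exact closed-form two-body orbit.
```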

“What’s ‘continuum‘ about then? Q on Star Trek?”

“Hardly, we’re talking predictability here. Q’s thing is unpredictability. A physics continuum is a solid or fluid with no relevant internal structure, just an unbroken mass from one edge to the other. Newton showed how to analyze a continuum’s smooth churning by considering the forces that act on an imaginary isolated packet of stuff at various points in there. He basically invented the idea of viscosity as a way to account for friction between a fluid and the walls of the pipe it’s flowing through.”

“Smooth churning, eh? I see a problem.”

“What’s that?”

“The eddies and whirlpools I see when I row — not smooth.”

“Good point. In fact, that’s the point I was getting to. We can use extensions of Newton’s technique to handle a single well‑behaved whirlpool, but in real life big whirlpools throw off smaller ones and they spawn eddies and mini‑vortices and so on, all the way down to atom level. That turns out to be another intractable calculation, just as impossible as the many‑body particle mechanics problem.”

“Ah‑hah! That’s the gap! Newton just did the simple stuff at both ends, stayed away from the middle where things get complicated.”

“Exactly. To his credit, though, he pointed the way for the rest of us.”

“So how can you handle the middle?”

“The same thing that quantum mechanics does — use statistics. That’s if the math expressions are average‑able which sometimes they’re not, and if statistical numbers are good enough for why you’re doing the calculation. Not good enough for weather prediction, for instance — climate is about averages but weather needs specifics.”

“Yeah, like it’s just started to snow which I wasn’t expecting. I’m heading home. See ya, Sy.”

“See ya, Vinnie. … Frankie. … Geez.”

~~ Rich Olcott

Only a H2 in A Gilded Cage

“OK, Susan, you’ve led us through doing high-pressure experiments with the Diamond Anvil Cell and you’ve talked about superconductivity and supermagnetism. How do they play together?”

“It’s early days yet, Sy, but Dias and a couple of other research groups may have brought us a new kind of superconductivity.”

“Another? You talked like there’s only one.”

“It’s one of those ‘depends on how you look at it‘ things, Al. We’ve got ‘conventional‘ superconductors and then there are the others. The conventional ones — elements like mercury or lead, alloys like vanadium‑silicon — are the model we’ve had for a century. Their critical temperatures are generally below 30 kelvins, really cold. We have a 60‑year‑old Nobel‑winning theory called ‘BCS‘ that’s so good it essentially defines conventional superconductivity. BCS theory is based on quantum‑entangled valence electrons.”

“So I guess the unconventional ones aren’t like that, huh?”

“Actually, there seem to be several groups of unconventionals, none of which quite fit the BCS theory. Most of the groups have critical temperatures way above what BCS says should be the upper limit. There are iron‑based and heavy‑metals‑based groups that use non‑valence electrons. There are a couple of different carbon‑based preparations that are just mystical. There’s a crazy collection of copper oxide ceramics that can contain five or more elements. Researchers have come up with theories for each of them, but the theories aren’t predictive — they don’t give dependable optimization guidelines.”

“Then how do they know how to make one of these?”

“Old motto — ‘Intuition guided by experience.’ There are so many variables in these complex systems — add how much of each ingredient, cook for how long at what temperature and pressure, chill the mix quickly or anneal it slowly, bathe it in an electrical or magnetic field and if so, how strong and at what point in the process… Other chemists refer to the whole enterprise as witch’s‑brew chemistry. But the researchers do find the occasional acorn in the grass.”

“I guess the high‑pressure ploy is just another variable then?”

“It’s a little less random than that, Sy. If you make two samples of a conventional superconductor, using different isotopes of the same element, the sample with the lighter isotope has the higher critical temperature. That’s part of the evidence for BCS theory, which says that electrons get entangled when they interact with vibrations in a superconductor. At a given temperature light atoms vibrate at higher frequency than heavy ones so there’s more opportunity for entanglement to get started. That set some researchers thinking, ‘We’d get the highest‑frequency vibrations from the lightest atom, hydrogen. Let’s pack hydrogens to high density and see what happens.'”
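
For the record, the isotope effect Susan mentions is usually written Tc ∝ M^(−1/2). Here is a tiny sketch; the reference values are mercury‑like but chosen only for illustration.

```python
# Conventional-superconductor isotope effect: Tc scales roughly as M**(-1/2).
# Reference numbers below are mercury-like, used only for illustration.
def tc_for_isotope(tc_ref, mass_ref, mass_new, alpha=0.5):
    """Critical temperature predicted for a different isotope of the same element."""
    return tc_ref * (mass_ref / mass_new) ** alpha

print(f"{tc_for_isotope(4.15, 202.0, 198.0):.2f} K")   # lighter isotope, slightly higher Tc
```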

“Sounds like a great idea, Susan.”

“Indeed, Al, but not an easy one to achieve. Solid metallic hydrogen should be the perfect case. Dias and his group reported on a sample of metallic hydrogen a couple of years ago but they couldn’t tell if it was solid or liquid. This was at 5 megabars pressure and their diamonds broke before they could finish working up the sample. Recent work has aimed at using other elements to produce a ‘hydrogen‑rich’ environment. When Dias tested H2S at 1.5 megabar pressure, they found superconductivity at 203 kelvins. Knocked everyone’s socks off.”

“Gold rush! Just squeeze and chill every hydrogen‑rich compound you can get hold of.”

“It’s a little more complicated than that, Sy. Extreme pressures can force weird chemistry. Dias reported that shining a green laser on a pressurized mix of hydrogen gas with powdered sulfur and carbon gave them a clear crystalline material whose critical temperature was 287 kelvins. Wow! A winner, for sure, but who knows what the stuff is? Another example — the H2S that Dias loaded into the DAC became H3S under pressure.”

“Wait, three hydrogens per sulfur? But the valency rules—”

“I know, Sy, the rules say two per sulfur. Under pressure, though, you get one unattached molecule of H2 crammed into the space inside a cage of H2S molecules. It’s called a clathrate or guest‑host structure. The final formula is H2(H2S)2 or H3S. Weird, huh? Really loads in the hydrogen, though.”

“Jupiter has a humungous magnetic field and deep‑down it’s got high‑density hydrogen, probably metallic. Hmmm….”

~~ Rich Olcott

Futile? Nope, Just Zero

“Megabar superconductivity.”

“Whoa, Susan. Too much information, too few words. Could you unpack that, please?”

“No problem, Sy. A bar is the barometric pressure (get it?) at sea level. A megabar is—”

“A million atmospheres, right?”

“Right, Al. So Ranga Dias and his crew were using their Diamond Anvil Cells to put their chemical samples under million-atmosphere pressures while they tested for superconductivity—”

“Like Superman uses?”

“Is he always like this, Sy?”

“Just when he gets excited, Susan. The guy loves Science, what can I say?”

“Sorry, Susan. So what makes conductivity into superconductivity?”

“Excellent question, Al. Answering it generated several Nobel Prizes and we still don’t have a complete explanation. I can tell you the what but I can’t give you a firm why. Mmm… what do you know about electrical resistance?”

“Just what we got in High School General Science. We built a circuit with a battery and a switch and an unknown resistor and a meter to measure the current. We figured the resistance from the voltage divided by the current. Or maybe the other way around.”

“You got it right the first try. The voltage drop across a resistor is the current times the resistance, V=IR so V/I=R. That’s for ordinary materials under ordinary conditions. But early last century researchers found that for many materials, if you get them cold enough the resistance is zero.”

“Zero? But … if you put any voltage across something like that it could swallow an infinite amount of current.”

“Whoa, Al, what’s my motto about infinities?”

“Oh yeah, Sy. ‘If your theory contains an infinity, you’ve left out physics that would stop that.’ So what’d stop an infinite current here?”

“The resistor wasn’t the only element in your experimental circuit. Internal resistance within the battery and meter would limit the current. Those 20th-century researchers had to use some clever techniques to measure what they had. Back to you, Susan.”
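
Sy’s point in miniature: treat the battery’s and meter’s internal resistance as a series resistor, and Ohm’s law caps the current even when the sample’s resistance hits zero. The values below are toy numbers, not anyone’s real measurement.

```python
# Why zero sample resistance doesn't mean infinite current: the battery and
# meter contribute internal resistance in series.  Toy values only.
def loop_current(v_battery, r_internal, r_sample):
    """Ohm's law for the whole circuit: I = V / (R_internal + R_sample)."""
    return v_battery / (r_internal + r_sample)

for r_sample in (10.0, 1.0, 0.1, 0.0):              # ohms; 0.0 = superconducting
    amps = loop_current(1.5, 0.5, r_sample)          # 1.5 V battery, 0.5 ohm internal
    print(f"sample resistance {r_sample:4.1f} ohm  ->  current {amps:.2f} A")
# The current tops out at 3 A, set entirely by the 0.5-ohm internal resistance.
```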

“Thanks, Sy. I’m going to remember that motto. Bottom line, Al, superconductors have zero resistance but only under the right conditions. You start with your test material, with a reasonable resistance at some reasonable temperature, and then keep measuring its resistance as you slowly chill it. If it’s willing to superconduct, at some critical temperature you see the resistance abruptly drop straight down to zero. The critical temperature varies with different materials. The weird thing is, once the materials are below their personal critical temperature all superconductors behave the same way. It seems to be all about the electrons and they don’t care what kind of atom they rode in on.”

“Wouldn’t copper superconduct better than iron?”

“Oddly enough, pure copper doesn’t superconduct at all. Iron and lead both superconduct and so do some weird copper-containing oxides. Oh, and superconductivity has another funny dependency — it’s blocked by strong magnetic fields, but on the other hand it blocks out weaker ones. Under normal conditions, a magnetic field can penetrate deep into most materials. However, a superconducting piece of material completely repels the field, forces the magnetic lines to go around it. That’s called the Meissner effect and it’s quantum and—”

“How’s it work?”

“Even though we’ve got a good theory for the materials with low critical temperature, the copper oxides and such are still a puzzle. Here’s a diagram I built for one of my classes…”

“The top half is the ordinary situation, like in a copper wire. Most of the current is carried by electrons near the surface, but there’s a lot of random motion there, electrons bouncing off of impurities and crystal defects and boundaries. That’s where ordinary conduction’s resistance comes from. Compare that with the diagram’s bottom half, a seriously simplified view of superconduction. Here the electrons act like soldiers on parade, all quantum‑entangled with each other and moving as one big unit.”

“The green spirals?”

“They represent an imposed magnetic field. See the red bits diving into the ordinary conductor? But the superconducting parade doesn’t make space for the circular motion that magnetism tries to impose. The force lines just bounce off. Fun fact — the supercurrent itself generates a huge magnetic field but only outside the superconductor.”

“How ’bout that? So how is megabar superconductivity different?”

~~ Rich Olcott

Bridging A Paradox

<chirp, chirp> “Moire here.”

“Hi, Sy. Vinnie. Hey, I’ve been reading through some of your old stuff—”

“That bored, eh?”

“You know it. Anyhow, something just don’t jibe, ya know?”

“I’m not surprised but I don’t know. Tell me about it.”

“OK, let’s start with your Einstein’s Bubble piece. You got this electron goes up‑and‑down in some other galaxy and sends out a photon and it hits my eye and an atom in there absorbs it and I see the speck of light, right?”

“That’s about the size of it. What’s the problem?”

“I ain’t done yet. OK, the photon can’t give away any energy on the way here ’cause it’s quantum and quantum energy comes in packages. And when it hits my eye I get the whole package, right?”

“Yes, and?”

“And so there’s no energy loss and that means 100% efficient and I thought thermodynamics says you can’t do that.”

“Ah, good point. You’ve just described one version of Loschmidt’s Paradox. A lot of ink has gone into the conflict between quantum mechanics and relativity theory, but Herr Johann Loschmidt found a fundamental conflict between Newtonian mechanics, which is fundamental, and thermodynamics, which is also fundamental. He wasn’t talking photons, of course — it’d be another quarter-century before Planck and Einstein came up with that notion — but his challenge stood on your central issue.”

“Goody for me, so what’s the central issue?”

“Whether or not things can run in reverse. A pendulum that swings from A to B also swings from B to A. Planets go around our Sun counterclockwise, but Newton’s math would be just as accurate if they went clockwise. In all his equations and everything derived from them, you can replace +t with ‑t to make run time backwards and everything looks dandy. That even carries over to quantum mechanics — an excited atom relaxes by emitting a photon that eventually excites another atom, but then the second atom can play the same game by tossing a photon back the other way. That works because photons don’t dissipate their energy.”

“I get your point, Newton-style physics likes things that can back up. So what’s Loschmidt’s beef?”

“Ever see a fire unburn? Down at the microscopic level where atoms and photons live, processes run backwards all the time. Melting and freezing and chemical equilibria depend upon that. Things are different up at the macroscopic level, though — once heat energy gets out or randomness creeps in, processes can’t undo by themselves as Newton would like. That’s why Loschmidt stood the Laws of Thermodynamics up against Newton’s Laws. The paradox isn’t Newton’s fault — the very idea of energy was just being invented in his time and of course atoms and molecules and randomness were still centuries away.”

“Micro, macro, who cares about the difference?”

“The difference is that the micro level is usually a lot simpler than the macro level. We can often use measured or calculated micro‑level properties to predict macro‑level properties. Boltzmann started a whole branch of Physics, Statistical Mechanics, devoted to carrying out that strategy. For instance, if we know enough about what happens when two gas molecules collide we can predict the speed of sound through the gas. Our solid‑state devices depend on macro‑level electric and optical phenomena that depend on micro‑level electron‑atom interactions.”

“Statistical?”

“As in, ‘we don’t know exactly how it’ll go but we can figure the odds…‘ Suppose we’re looking at air molecules and the micro process is a molecule moving. It could go left, right, up, down, towards or away from you like the six sides of a die. Once it’s gone left, what are the odds it’ll reverse course?”

“About 16%, like rolling a die to get a one.”

“You know your odds. Now roll that die again. What’s the odds of snake‑eyes?”

“16% of 16%, that’s like 3 outa 100.”

“There’s a kajillion molecules in the room. Roll the die a kajillion times. What are the odds all the air goes to one wall?”

“So close to zero it ain’t gonna happen.”
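
Putting rough numbers on that exchange (my own arithmetic, done in logarithms so nothing underflows): the chance that every one of N molecules “rolls a one” is (1/6)^N, and it collapses toward zero absurdly fast.

```python
# The chance that every one of N molecules "rolls a one" is (1/6)**N.
# Work in base-10 logarithms so the numbers don't underflow.  Even the
# mole-sized N below is smaller than a real room's molecule count.
import math

log10_one_sixth = math.log10(1.0 / 6.0)      # about -0.778
for n in (2, 10, 100, 6.02e23):
    print(f"N = {n:>10.3g}: probability = 10^({n * log10_one_sixth:.3g})")
# N = 2 gives about 10^-1.56, roughly 3 in 100, matching Vinnie's snake-eyes;
# a mole of molecules gives an exponent around -4.7e23: "ain't gonna happen."
```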

“And Boltzmann’s Statistical Mechanics explained why not.”

“Knowing about one molecule predicts a kajillion. Pretty good.”

[Image: San Francisco’s Golden Gate Bridge, looking South. Photo by Rich Niewiroski Jr. / CC BY 2.5]

~~ Rich Olcott

Free Energy, or Not

From: Richard Feder <rmfeder@fortleenj.com>
To: Sy Moire <sy@moirestudies.com>
Subj: Questions

What’s this about “free energy”? Is that energy that’s free to move around anywhere? Or maybe the vacuum energy that this guy said is in the vacuum of space that will transform the earth into a wonderful world of everything for free for everybody forever once we figure out how to handle the force fields and pull energy out of them?


From: Sy Moire <sy@moirestudies.com>
To: Richard Feder <rmfeder@fortleenj.com>

Subj: Re: Questions

Well, Mr Feder, as usual you have a lot of questions all rolled up together. I’ll try to take one at a time.

It’s clear you already know that to make something happen you need energy. Not a very substantial definition, but then energy is an abstract thing it took humanity a couple of hundred years to get our minds around and we’re still learning.

Physics has several more formal definitions for “energy,” all clustered around the ability to exert force to move something and/or heat something up. The “and/or” is the kicker, because it turns out you can’t do just the moving. As one statement of the Second Law of Thermodynamics puts it, “There are no perfectly efficient processes.”

For example, when your car’s engine burns a few drops of gasoline in the cylinder, the liquid becomes a 22000‑times larger volume of hot gas that pushes the piston down in its power stroke to move the car forward. In the process, though, the engine heats up (wasted energy), gases exiting the cylinder are much hotter than air temperature (more wasted energy) and there’s friction‑generated heat all through the drive train (even more waste). Improving the drive train’s lubrication can reduce friction, but there’s no way to stop energy loss into heated-up combustion product molecules.

Two hundred years of effort haven’t uncovered a usable loophole in the Second Law. However, we have been able to quantify it. Especially for practically important chemical reactions, like burning gasoline, scientists can calculate how much energy the reaction product molecules will retain as heat. The energy available to do work is what’s left.

For historical reasons, the “available to do work” part is called “free energy.” Not free like running about like ball lightning, but free in the sense of not being bound up in jiggling heated‑up molecules.
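
If it helps to see the bookkeeping, the quantity chemists compute is the Gibbs free energy change, ΔG = ΔH − TΔS: the total energy released, minus the part that stays tied up in molecular jiggling. The sketch below uses round illustrative numbers, not measured data for gasoline.

```python
# Free-energy bookkeeping: of the total energy released (the enthalpy change dH),
# the part tied up in heated, jiggling molecules is T*dS; what's left over,
# dG = dH - T*dS, is the "free" energy available to do work.
# Round illustrative numbers, not measured data for any specific fuel.
def gibbs_free_energy(dH_kJ, dS_kJ_per_K, T_K):
    """Gibbs free energy change per mole of reaction."""
    return dH_kJ - T_K * dS_kJ_per_K

dH = -5100.0     # heat released by burning a mole of fuel, kJ (illustrative)
dS = -0.5        # entropy change of the reaction, kJ/K (illustrative)
dG = gibbs_free_energy(dH, dS, 298.0)
print(f"about {-dG:.0f} kJ available as work out of {-dH:.0f} kJ released")
```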

Vacuum energy is just the opposite of free — it’s bound up in the structure of space itself. We’ve known for a century that atoms waggle back and forth within their molecules. Those vibrations give rise to the infrared spectra we use for remote temperature sensing and for studying planetary atmospheres. One of the basic results of quantum mechanics is that there’s a minimum amount of motion, called zero‑point vibration, that would persist even if the molecule were frozen to absolute zero temperature.

There are other kinds of zero‑point motion. We know of two phenomena, the Casimir effect and the Lamb shift, that can be explained by assuming that the electric field and other force fields “vibrate” at the ultramicroscopic scale even in the absence of matter. Not vibrations like going up and down, but like getting more and less intense. It’s possible that the same “vibrations” spark radioactive decay and some kinds of light emission.

Visualize space being marked off with a mesh of cubes. In each cube one or more fields more‑or‑less periodically intensify and then relax. Each variation’s strength and timing are unpredictable. Neighboring cubes may or may not sync up and that’s unpredictable, too.

The activity is all governed by yet another Heisenberg Uncertainty Principle trade‑off. The stronger the intensification, the less certain we can be about when or where the next one will happen.

What we can say is that whether you look at a large volume of space (even an atom is ultramicroscopically huge) or a long period of time (a second might as well be a millennium), on the average the intensity is zero. All our energy‑using techniques involve channeling energy from a high‑potential source to a low‑potential sink. Vacuum energy sources are everywhere but so are the sinks and they all flit around. Catching lightning in a jar was easy by comparison.

Regards,
Sy Moire.

~~ Rich Olcott

Question Time

Cathleen unmutes her mic. “Before we wrap up this online Crazy Theories contest with voting for the virtual Ceremonial Broom, I’ve got a few questions here in the chat box. The first question is for Kareem. ‘How about negative evidence for a pre-mammal civilization? Played-out mines, things like that.‘ Kareem, over to you.”

“Thanks. Good question but you’re thinking way too short a time period. Sixty‑six million years is plenty of time to erode the mountain a mine was burrowing into and take the mining apparatus with it.

“Here’s a different kind of negative evidence I did consider. We’re extracting coal now that had been laid down in the Carboniferous Period 300 million years ago. At first, I thought I’d proved no dinosaurs were smart enough to dig up coal because it’s still around where we can mine it. But on second thought I realized that sixty-six million years is enough time for geological upthrust and folding to expose coal seams that would have been too deeply buried for mining dinosaurs to get at. So like the Silurian Hypothesis authors said, no conclusions can be drawn.”

“Nice response, Kareem. Jim, this one’s for you. ‘You said our observable universe is 93 billion lightyears across, but I’ve heard over and over that the Universe is 14 billion years old. Did our observable universe expand faster than the speed of light?‘”

“That’s a deep space question, pun intended. The answer goes to what we mean when we say that the Hubble Flow expands the Universe. Like good Newtonian physicists, we’re used to thinking of space as an enormous sheet of graph paper. We visualize statements like, ‘distant galaxies are fleeing away from us‘ as us sitting at one spot on the graph paper and those other galaxies moving like fireworks across an unchanging grid.

“But that’s not the proper post-Einstein way to look at the situation. What’s going on is that we’re at our spot on the graph paper and each distant galaxy is at its spot, but the Hubble Flow stretches the graph paper. Suppose some star at the edge of our observable universe sent out a photon 13.7 billion years ago. That photon has been headed towards us at a steady 300000 kilometers per second ever since and it finally reached an Earth telescope last night. But in the meantime, the graph paper stretched underneath the photon until space between us and its home galaxy widened by a factor of 3.4.

“By the way, it’s a factor of 3.4 instead of 6.8 because the 93 billion lightyear distance is the diameter of our observable universe sphere. The photon came in from the sphere’s edge, so its 13.7 billion lightyear trip gets compared with the sphere’s 46.5 billion lightyear radius, not with the full diameter.
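
The arithmetic behind that factor, spelled out using nothing beyond Jim’s own numbers:

```python
# The arithmetic behind "3.4, not 6.8": compare the photon's trip with the
# sphere's radius, not its diameter.
diameter_gly = 93.0       # observable-universe diameter, billion lightyears
trip_gly = 13.7           # the photon's light-travel distance, billion lightyears
radius_gly = diameter_gly / 2
print(f"stretch factor: {radius_gly / trip_gly:.1f}")   # 46.5 / 13.7, about 3.4
```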

“Mmm, one more point — The Hubble Flow rate depends on distance and it’s really slow on the human‑life timescale. The current value of the Hubble Constant says that a point that’s 3×10¹⁹ kilometers away from us is receding at about 70 kilometers per second. To put that in perspective, Hubble Flow is stretching the Moon away from us by 3000 atom‑widths per year, or about 1/1300 the rate at which the Moon is receding because of tidal friction.”

“Nice calculation, Jim. Our final question is for Amanda. ‘Could I get to one of the other quantum tracks if I dove into a black hole and went through the singularity?‘”

“I wouldn’t want to try that but let’s think about it. Near the structure’s center gravitational intensity compresses mass-energy beyond the point that the words ‘particle’ and ‘quantum’ have meaning. All you’ve got is fields fluctuating wildly in every direction of spacetime. No sign posts, no way to navigate, you wouldn’t be able to choose an exit quantum track. But you wouldn’t be able to exit anyway because in that region the arrow of time points inward. Not a sci‑fi story with a happy ending.”

“<whew> Alright, folks, time to vote. Who presented the craziest theory? All those in favor of Kareem, click on your ‘hand’ icon. … OK. Now those voting for Jim? … OK. Now those voting for Amanda? … How ’bout that, it’s a tie. I guess for each of you there’s a parallel universe where you won the virtual Ceremonial Broom. Congratulations to all and thanks for such an interesting evening. Good night, everyone.”

~~ Rich Olcott

Too Many Schrödingers

Cathleen takes back control of the conference software. “Thanks, Jim. OK, the final contestant in our online Crazy Theories contest is the winner of our last face-to-face event where she told us why Spock and horseshoe crabs both have green blood. You’re up, Amanda.”

“Thanks, and hello out there. I can’t believe Jim and I are both talking about parallel universes. It’s almost like we’re thinking in parallel, right?”

<Jim’s mic is muted so he makes gagging motions>

“We need some prep work before I can talk about the Multiverse. I’m gonna start with this heat map of North America at a particular time. Hot in the Texas panhandle, cool in British Columbia, no surprise. You can do a lot with a heat map — pick a latitude and longitude, it tells you the relative temperature. Do some arithmetic on all the numbers and you can get average temperature, highs and lows, front strength in degrees per mile, lots of stuff like that.

“You build this kind of map by doing a lot of individual measurements. If you’re lucky you can summarize those measurements with a function, a compact mathematical expression that does the same job — pick a latitude and longitude, it tells you the value. Three nice things about functions — they take up a lot less space than a map, you can use straightforward mathematical operations on them so getting statistics is less work than with a map, and you can form superpositions by adding functions together.”

Cathleen interrupts. “Amanda, there’s a question in the chat box. ‘Can you give an example of superposition?’”

“Sure. You can superpose simple sine‑wave functions to describe chords for sound waves or blended colors for light waves, for instance.
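
Amanda’s chord example, written out: superposing two functions just means adding them pointwise. The sketch below uses the standard A4 and E5 pitches; everything else is illustrative.

```python
# Superposition by addition: two sine-wave functions and their pointwise sum.
# Frequencies are the standard A4 and E5 pitches; the sample times are arbitrary.
import math

def tone(freq_hz):
    """Return a unit-amplitude sine wave as a function of time (seconds)."""
    return lambda t: math.sin(2.0 * math.pi * freq_hz * t)

a4, e5 = tone(440.0), tone(659.25)
chord = lambda t: a4(t) + e5(t)        # the superposition is just the sum

for t in (0.0, 0.0005, 0.001):
    print(f"t = {t:.4f} s: A4 = {a4(t):+.3f}, E5 = {e5(t):+.3f}, chord = {chord(t):+.3f}")
```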

“Now when we get to really small‑scale thingies, we need quantum calculations. The question is, what do quantum calculations tell us? That’s been argued about for a hundred years because the values they generate are iffy superpositions. Twenty percent of this, eighty percent of that. Everybody’s heard of that poor cat in Schrödinger’s box.

“Many researchers say the quantum values are relative probabilities for observing different results in an experiment — but most of them carefully avoid worrying about why the answers aren’t always the same. Einstein wanted to know what Bohr was averaging over to get his averages. Bohr said it doesn’t matter, the percentages are the only things we can know about the system and it’s useless to speculate further.

“Hugh Everett thought bigger. He suggested that the correct quantum function for an observation should include experiment and experimenter. He took that a step further by showing that a proper quantum function would need to include anyone watching the experimenter and so on. In fact, he proposed, maybe there’s just one quantum function for the entire Universe. That would have some interesting implications.

“Remember Schrödinger’s catbox with two possible experimental results? Everett would say that his universal quantum function contains a superposition of two component sub-functions — happy Schrödinger with a live kitty and sad Schrödinger with a disposal problem. Each Schrödinger would be quite certain that he’d seen the definite result of a purely random operation. Two Schrödingers in parallel universes going forward.

“But in fact there’d be way more than two. When Schrödinger’s eye absorbs a photon, or maybe doesn’t, that generates another pair of universes. So do the quantum events that occur as his nerve cells fire, or don’t. Each Schrödinger moves into the future embedded in a dense bundle of parallel universes.”

Cathleen interrupts. “Another question. ‘What about conservation of mass?‘”

“Good question, whoever asked that. Everett doesn’t address that explicitly in his thesis, but I think he assumed the usual superposition math. That always includes a fix‑up step so that the sum of all the pieces adds up to unity. Half a Schrödinger mass on one track and half on the other. Even as each of them splits again and again and again the total is still only one Schrödinger‑mass. There’s another interpretation — each Schrödinger’s universe would be independent of the others so there’s no summing‑up to generate a conservation‑of‑mass problem. Your choice.

“Everett traded quantum weirdness for a weird Universe. Not much of a trade-off, I think.”

~~ Rich Olcott

Worlds Enough And Time Reversed

Cathleen unmutes her mic. “Thanks, Kareem. Our next Crazy Theory presentation is from one of my Cosmology students, Jim.”

“Thanks, Cathleen. Y’all have probably heard about how Relativity Theory and Quantum Mechanics don’t play well together. Unfortunately, people have mixed the two of them together with Cosmology to spawn lots of Crazy Theories about parallel universes. I’m going to give you a quick look at a couple of them. Fasten your seat belt, you’ll need it.

“The first theory depends on the idea that the Universe is infinitely large and we can only see part of it. Everything we can see — stars, galaxies, the Cosmic Microwave Background — they all live in this sphere that’s 93 billion lightyears across. We call it our Observable Universe. Are there stars and galaxies beyond the sphere? Almost certainly, but their light hasn’t been in flight long enough to reach us. By the same token, light from the Milky Way hasn’t traveled far enough to reach anyone outside our sphere.

“Now suppose there’s an alien astronomer circling a star that’s 93 billion lightyears away from us. It’s in the middle of its observable universe just like we’re in the middle of ours. And maybe there’s another observable universe 93 billion lightyears beyond that, and so on to infinity. Oh, by the way, it’s the same in every direction so there could be an infinite number of locally-observable universes. They’re all in the same space, the same laws of physics rule everywhere, it’s just that they’re too far apart to see each other.

“The next step is a leap. With an infinite number of observable universes all following the same physical laws, probability says that each observable universe has to have twins virtually identical to it except for location. There could be many other people exactly like you, out there billions of lightyears away in various directions, sitting in front of their screens or jogging or whatever. Anything you might do, somewhere out there there’s at least one of you doing that. Or maybe a mirror image of you. Lots of yous in lots of parallel observable universes.”

“I don’t like that theory, on two grounds. First, there’s no way to test it so it’s not science. Second, I think it plays fast and loose with the notion of infinity. There’s a big difference between ‘the Universe is large beyond anything we can measure‘ and ‘the Universe is infinite‘. If you’ve been reading Sy Moire’s stuff you’ve probably seen his axiom that if your theory contains an infinity, you’ve left out physics that would stop that. Right, Cathleen?”

Cathleen unmutes her mic. “That quote’s good, Jim.”

“Thanks, so’s the axiom. So that’s one parallel universe theory. OK, here’s another one and it doesn’t depend on infinities. The pop‑science press blared excitement about time‑reversal evidence from the ANITA experiment in Antarctica. Unfortunately, the evidence isn’t anywhere as exciting as the reporting has been.

“The story starts with neutrinos, those nearly massless particles that are emitted during many sub‑atomic reactions. ANITA is one kind of neutrino detector. It’s an array of radio receivers dangling from a helium‑filled balloon 23 miles up. The receivers are designed to pick up the radio waves created when a high‑energy neutrino interacts with glacier ice, which doesn’t happen often. Most of the neutrinos come in from outer space and tell us about solar and stellar activity. However, ANITA detected two events, so‑called ‘anomalies,’ that the scientists can’t yet explain and that’s where things went nuts.

“Almost as soon as the ANITA team sent out word of the anomalies, over three dozen papers were published with hypotheses to account for them. One paper said maybe the anomalies could be interpreted as a clue to one of Cosmology’s long‑standing questions — why aren’t there as many antiprotons as protons? A whole gang of hypotheses suggest ways that maybe something in the Big Bang directed protons into our Universe and antiprotons into a mirror universe just like ours except charges and spacetime are inverted with time running backwards. There’s a tall stack of maybes in there but the New York Post and its pop‑sci allies went straight for the Bizarro parallel universe conclusion. Me, I’m waiting for more data.”

~~ Rich Olcott