Schrödinger’s Elephant

Al’s coffee shop sits right between the Astronomy and Physics buildings, which is good because he’s a big Science fan.  He and Jeremy are in an excited discussion when Anne and I walk in.  “Two croissants, Al, and two coffees, black.”

“Comin’ up, Sy.  Hey, you see the news?  Big days for gravitational astronomy.”

Jeremy breaks in.  “There’s a Nobel Prize been announced —”

“Kip Thorne the theorist and Barry Barish the management guy —”

“and Rainer Weiss the instrumentation wizard —”

“shared the Physics prize for getting LIGO to work —”

“and it saw the first signal of a black hole collision in 2015 —”

“and two more since —”

“and confirmed more predictions from relativity theory —”

“and Italy’s got their Virgo gravitational wave detector up and running —”

“And Virgo and our two LIGOs —”

“Well, they’re both aLIGOs now, being upgraded and all —”

“all three saw the same new wave —”

“and it’s another collision between black holes with weird masses that we can’t account for.  Who’s the lady?”

“Al, this is Anne.  Jeremy, close your mouth, you’ll catch a fly.”  (Jeremy blushes, Anne twinkles.)  “Anne and I are chasing an elephant.”

“Pleased to meetcha, Anne.  But no livestock in here, Sy, the Health Department would throw a fit!”

I grin.  “That’s exactly what Eddie said.  It’s an abstract elephant, Al.  We’ve been discussing entropy. Which is an elephant because it’s got so many aspects no-one can agree on what it is.  It’s got something to do with heat capacity, something to do with possibilities you can’t rule out, something to do with signals and information.  And Hawking showed that entropy also has something to do with black holes.”

“Which I don’t know much about, fellows, so someone will have to explain.”

Jeremy leaps in.  “I can help with that, Miss Anne, I just wrote a paper on them.”

“Just give us the short version, son, she can ask questions if she wants a detail.”

“Yessir.  OK, suppose you took all the Sun’s mass and squeezed it into a ball just a few miles across.  Its density would be so high that escape velocity is faster than the speed of light so an outbound photon just falls back inward and that’s why it’s black.  Is that a good summary, Mr Moire?”

“Well, it might be good enough for an Internet blog but it wouldn’t pass inspection for a respectable science journal.  Photons don’t have mass so the whole notion of escape velocity doesn’t apply.  You do have some essential elements right, though.  Black holes are regions of extreme mass density, we think more dense than anywhere else in the Universe.  A black hole’s mass bends space so tightly around itself that nearby light waves are forced to orbit its region or even spiral inward.  The orbiting happens right at the black hole’s event horizon, the thin shell that encloses the space where things get really weird.  And Anne, the elephant stands on that shell.”

“Wait, Mr Moire, we said that the event horizon’s just a mathematical construct, not something I could stand on.”

“And that’s true, Jeremy.  But the elephant’s an abstract construct, too.  So abstract we’re still trying to figure out what’s under the abstraction.”

“I’m trying to figure out why you said the elephant’s standing there.”

“Anne, it goes back to the event horizon’s being a mathematical object, not a real one.  Its spherical surface marks the boundary of the ultimate terra incognita.  Lightwaves can’t pass outward from it, nor can anything material, not even any kind of a signal.  For at least some kinds of black hole, physicists have proven that the only things we can know about one are its mass, spin and charge.  From those we can calculate some other things like its temperature, but black holes are actually pretty simple.”

“So?”

“So there’s a collision with Quantum Theory.  One of QT’s fundamental assumptions is that in principle we can use a particle’s current wave function to predict probabilities for its future.  But the wave function information disappears if the particle encounters an event horizon.  Things are even worse if the particle’s entangled with another one.”

“Information, entropy, elephant … it’s starting to come together.”

“That’s what he said.”

~~ Rich Olcott


At The Turn of A Card

Not much going on today.  I’m dealing myself a hand of solitaire when I hear a familiar fizzing sound.  “Hello, Anne.  Good to see you again.”

She’s freshened up that white satin outfit and is looking very good.  “Hello, Sy.  Busy?”

“Not so’s you’d notice it.  What can I do for you?”

“Can’t a girl just drop in when she wants to visit?  Playing with real cards, I see.  That’s good, but your tens and treys are frozen.”

“That’s the way the odds break sometimes.  The elephant’s in the room.”

“Entropy again?  What’s it look like this time?”

“These cards and surprise.  How surprised would you be if I were to draw a queen from the stock pile?”

“No queens showing, so some surprised but not very surprised.”

“You know me, I’m a physicist, we put numbers to things.  So put numbers to the situation.”

<sigh>  “OK, there are 52 cards in the deck and you’ve got … 28 cards in that triangle, so there are 24 left in the stock.  Four of them have to be queens.  Four out of 24 is one out of 6.”

“Or 17%.  And the odds for the queen of hearts?”

“I’m here so it’s 100% until I leave.  Oh, I know, you’re talking about the cards.  One in 24 or 4%.  So I’d be four times as surprised at seeing the heart queen as I would at seeing any of them.  Pooh.”

“Now how about the odds of drawing all four queens?”

“Four in 24, times three in 23, times two in 22, times one in 21.  Whatever, it’s a very small number and I’d be very surprised.”
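Put numbers to it, as Sy would say — a quick Python check, not part of the conversation.  Note that each numerator counts the queens still waiting in the stock, while each denominator counts the cards left:

```python
from fractions import Fraction

# Probability of drawing all four queens in four draws from the 24-card stock:
# 4 queens among 24 cards, then 3 among 23, 2 among 22, 1 among 21.
p_all_queens = Fraction(4, 24) * Fraction(3, 23) * Fraction(2, 22) * Fraction(1, 21)

print(p_all_queens)          # 1/10626
print(float(p_all_queens))   # ~9.4e-05 -- "a very small number" indeed
```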

“Well, here’s where we get another look at the elephant.  There’s a definition of entropy that links directly to those percentages AND can handle extremely small ones.  What do you know about logarithms?”

“A little.  I read your last series of posts.”

“Wonderful, that simplifies things.  Let’s start with a strange dissociation thought up by Claude Shannon, to whom we owe the entire field of information theory.  His crucial insight was that he had to distinguish between information and meaning.”

“How can they be different?  If I say ‘green’ that means, well, green.”

“It’s all about context.  If you’re telling me what color something is, saying ‘green’ is telling me that the thing isn’t white or red or any of the other umm, nine colors I know the names of.  But if you’re telling me someone is really inexperienced then I know not to trust them with a complicated task that has to be done right the first time.  From Shannon’s point of view, the information is the signal ‘green,’ and the meaning is set by the context.”

“You’re going somewhere with this, I suppose?”

“Mm-hm.  In Shannon’s theory, the more surprising the message is, the more information it contains.  Remember when you told me that in one of your alternate realities you’d seen me wearing a green shirt?  That was a surprise and it told me you’d visited an unusual reality, because I rarely wear green.  If you’d told me the shirt was black or grey, that would have been much less surprising and much less informative.  Shannon’s trick was in putting numbers to that.”

“You’re just dragging this out, aren’t you?”

“No-no, only two more steps to the elephant.  First step is that Shannon defined a particular signal’s information content to be proportional to the negative of the logarithm of its probability.  Suppose I’m maybe 1% likely to wear green but equally likely to wear any of the other 11 colors.  Each of those colors has a 9% probability.  log₁₀(1%) is -2.00, so green’s information content is 2.00, but -log₁₀(9%) is only 1.04.  By Shannon’s definition, when you said ‘green’ in this context you gave me nearly twice the information of any of the other color names.”

“Why’d you use base-10 logarithms?”

“Convenience.  It’s easy to figure log₁₀(1%).  Information scientists tend to use base-2, physicists go for base-e.  Final step — Shannon took the information content of each possible signal, multiplied it by the probability of that signal, added those products together and called it the signal system’s information entropy.  For our colors it’d be (0.01×2.00)+11×(0.09×1.04)≈1.06.  Regardez, voici l’éléphant!”
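Shannon’s two steps are easy to check in a few lines of Python — a sketch using the 1%-green, eleven-other-colors example from the dialogue, with base-10 logs:

```python
import math

probs = [0.01] + [0.09] * 11          # green at 1%, eleven other colors at 9% each

# Step one: information content of each signal is -log10(probability)
info = [-math.log10(p) for p in probs]
print(round(info[0], 2))              # 2.0 -- 'green' carries the most information

# Step two: entropy is the probability-weighted sum of the information contents
entropy = sum(p * i for p, i in zip(probs, info))
print(round(entropy, 2))              # 1.06
```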

“Ooo, French!”

“Aimeriez-vous un croissant et un café?  My treat at Al’s.”

~~ Rich Olcott

A log by any other name

“Hey, Mr Moire?”

“Yes, Jeremy?”

“What we did with logarithms and exponents.  You showed me how my Dad’s slide-rule uses powers of 10, but we did that compound interest stuff with powers of 1.1.  Does that mean we could make a slide-rule based on powers of any number?”

“Sure could, in principle, but it’d be a lot harder to use.  A powers-of-ten model works well with scientific notation.  Suppose you want to calculate the number of atoms in 5.3 grams of carbon.  Remember Avogadro’s number?”

“Ohhh, yeah, chem class etched that into my brain.  It’s 6.02×10²³ atoms per gram atomic weight.  Carbon’s atomic weight is 12, so the atom count would be (5.3 grams)×(6.02×10²³ atoms / 12 grams), whatever that works out to be.”
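Jeremy’s setup works out in one line of Python — a sketch outside the dialogue, using his numbers:

```python
AVOGADRO = 6.02e23        # atoms per gram atomic weight (per mole)

grams = 5.3
atomic_weight = 12.0      # carbon

atoms = grams * AVOGADRO / atomic_weight
print(f"{atoms:.3g}")     # 2.66e+23 atoms
```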

“Nicely set up.  With the slide-rule you’d do the 5.3×6.02/12 part, then take care of the ten-powers in your head or on a scrap of paper.  It’d be ugly to do that with a slide-rule based on powers of π, for example.  Although, once you get away from the slide-rule it’s perfectly possible to do log-and-exponent calculations on other bases.  A couple of them are real popular.  Base-2, for instance.”

“Powers of two?  Oh, binary!   2, 4, 8, 16, like that.  And 1/2, 1/4, 1/8.  Hard to imagine what a base-2 slide-rule would look like — zero at one end, I suppose, and one at the other and lots of fractions in-between.”

“Well, no.  Is there a zero on your Dad’s base-10 slide-rule there?”

“Uh, no, the C scale has a one at each end.”

“The left-hand ‘1’ can stand for one or ten or a thousand or a thousandth.  Whatever you pick for it, the right-hand ‘1’ stands for ten times that.”

“Ah, then a base-2 slide-rule would also have ones at either end in binary but they’d mean numbers that differ by a factor of two.  But there’d still be a bunch of fractions in-between, right?”

“Right, but no zero anywhere.  Why not?”

“Oh, there’s no power-of-two that equals zero.”

“No power-of-anything that equals zero.  Except zero, of course, but zero-to-anything is still zero so that’s not much use for calculating.  On the other hand, anything to the zero power is 1 so log(1)=0 in every base system.”

“You said a couple of popular bases.  What’s the other one?”

“Euler’s number e=2.71828…  It’s actually closely related to that compound interest calculation you did.  There’s several ways to compute e, but the most relevant for us is the limit of [1+(1/n)]ⁿ as n gets very large.  Try that on your spreadsheet app.”

“OK, I’m loading B1 with =(1+(1/C1))^C1 and I’ll try different numbers in C1.  One hundred gives me 2.7048, a thousand gives me 2.7169 (diminishing returns, hey) — ah, a million sure enough comes up with 2.71828.”

“There you go.  Changing C1 to even bigger values would get you even closer to e’s exact value but it’s one of those irrationals like π so you can only get better and better approximations.  You see the connection between that formula and the $×[1+(rate/n)]ⁿ formula?”
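Jeremy’s spreadsheet experiment translates directly to Python — same formula, and you can push n as high as you like:

```python
import math

def e_approx(n):
    """The compound-interest limit: (1 + 1/n)**n approaches e as n grows."""
    return (1 + 1 / n) ** n

for n in (100, 1_000, 1_000_000):
    print(n, round(e_approx(n), 5))
# 100 -> 2.70481, 1000 -> 2.71692, 1000000 -> 2.71828

print(round(math.e, 5))               # 2.71828 -- the target
```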

“Sure, but what use is it?  If that’s the e formula the rate is 100%.”

“You can think of e as what happens when growth is compounded continuously.  It’s not often used in retail financial applications, but it’s everywhere in advanced math and physics.  I don’t want to get too much into that because calculus, but here’s one specialness.  The exponential function eˣ is the only one whose slope at every point is equal to its value there.”

“Nice.  But we’ve been talking logs.  Are base-e logarithms special?”

“So special that they’ve got their own name — natural logarithms, as opposed to common logarithms, the base-10 kind that power slide-rules.  They’ve even got their own abbreviations — ln(x) or logₑ(x) as opposed to log(x) or log₁₀(x).”

“What makes them ‘natural’?”

“That’s harder to answer.  The simplest way is to point out that you can convert a log on one base to any other base.  For instance, ln(10)=2.303 because e^2.303=10=10¹.  So the natural log of any number x is 2.303 times log₁₀(x), and log₁₀(x)=ln(x)/2.303.  There are loads of equations that look simple and neat in terms of ln but get clumsy if you have to plug in 2.303 everywhere.”
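The base conversion is easy to verify numerically — a Python sketch, where 2.303 is ln(10) rounded to three places:

```python
import math

LN10 = math.log(10)        # 2.302585..., the 2.303 in the dialogue

x = 42.0
# Natural log is ln(10) times the common log, and back again by division:
assert math.isclose(math.log(x), LN10 * math.log10(x))
assert math.isclose(math.log10(x), math.log(x) / LN10)

print(round(LN10, 3))      # 2.303
```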

“Don’t want to be clumsy.”

~~ Rich Olcott

Powers to The People

“You say logarithms and exponents have to do with growth, Mr Moire?”

“Mm-hm.  Did they teach you about compound interest in that Modern Living class, Jeremy?”

“Yessir.  Like if I took out a loan of say $10,000 at 10% interest, I’d owe $11,000 at the end of the first year and, um…, $12,100 after two years because the 10% applies to the interest, too.”

“Nice mental arithmetic.  So what you did was multiply that base amount by 1+10% the first year and (1+10%)² the second, right?”

“Well, that’s not the way I thought of it, but that’s the way it works out, alright.”

“So it’d be (1+10%)³ the third year and in general (1+rate)n after n years, assuming you don’t make any payments.”

“Sure.”

“OK, how do we have to revise that formula if the interest is compounded daily and you get lucky and pay it off in a lump sum after 19 months?”

“Can I use your whiteboard?”

“Go ahead.”

“OK, first thing to change is the rate, because the 10% was for the whole year.  We need to use 10%/365 inside those parentheses.  But then we’re counting time by days instead of years.  Each day we multiply the previous amount by another (1+10%/365), which makes the exponent be the number of days the loan is out, which is 19 times whatever the average number of days in a month is.”

“Why not just use 19×(365÷12)?”

“Can we do that?  In an exponent?”

“Perfectly legal, done in all the best circles.”

“So what we’ve got is
10000×[1+(10%/365)]^(19×(365÷12)).

“Try poking that into your smartphone’s spreadsheet app and format it for dollars.”

“In spreadsheet-ese that’d be
=10000*(1+(0.1/365))^(19*(365/12)).
Hah!  The app took it, and comes up with … $11,715.31.  Lemme try that with two years that’s 24 months.  Now it’s $12,213.69.  Hey, that’s nearly $114 more than two years compounded once-a-year.  Compounding more often generates more interest, doesn’t it?”
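The same arithmetic runs anywhere, not just in a spreadsheet — here’s the 19-month daily-compounded loan as a small Python sketch:

```python
principal = 10_000
annual_rate = 0.10
days_per_year = 365

def balance(months):
    """Daily-compounded balance after the given number of months, no payments."""
    daily_rate = annual_rate / days_per_year
    days = months * (days_per_year / 12)     # average days per month
    return principal * (1 + daily_rate) ** days

print(f"${balance(19):,.2f}")    # $11,715.31
print(f"${balance(24):,.2f}")    # $12,213.69
```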

“Which is why daily compounding is the general rule in consumer lending.  But there’s a couple more lessons to be learned here.  One, you can do full-on arithmetic inside an exponent.  That’s what the log log scales are for on a slide rule.  Two, the expression you worked up has the form
base×(growth factor)^(time function).
Any time you’re modeling something that grows or shrinks in some percentage-wise fashion, you’re going to have exponential expressions like that.”

“Hey, I tried compounding more often and it didn’t make much difference.  I put in 3650 instead of 365 and it only added 30¢ to the total.”

“Which gives me an idea.  Load up cells A1:A7 in your spreadsheet with this series: 1, 3, 10, 30, 100, 300, 1000.  Got it?”

“Ahhh … OK.  Now what?”

“Now load cell B1 with +10000*(1+(0.1/A1))^(24*(A1/12)).”

“Says $12,100.”

“Fine.  Now copy that cell down through B7.”

“Hmm…  The answers go up but by less and less.”

“Right.  Now highlight A1:B7 and tell your spreadsheet to generate a scatter plot connected by straight lines.”

“Gimme a sec … OK.  The line goes straight up, then straight across almost.”

“Final step — click on the x-axis and tell the program to use a logarithmic scale.”

“Hey, the x-numbers scrunch and wrap like on the A, B and K scales on Dad’s slide-rule.”

“Which is what you’d expect, right?  They both use logarithmic scales.  The slide-rule uses logarithms to do its arithmetic thing.  The graphing software lets you use logarithms to display big numbers together with small numbers.  But the neat thing about this graph is that it shows two different flavors of a general pattern.  Adding something, say 20, to a number to the left on the x-axis moves you a longer distance than adding the same amount somewhere over on the right.  That’s diminishing returns.”

“Look, the heeling-over curve shows diminishing returns from compounding interest more and more often.”

“Good.  Now copy A1:A7 by value into C1:C7 and generate a scatter plot of B1:C7. This time apply the logarithmic scale to the y-axis. This’ll show us how often we’d need to compound to get the yield on the x-axis.”

“Whoa, it blows up, like there’s no way to get up to $12,300.”

“Call it exploding returns.  Increasing the exponent increases the growth factor’s impact.  Beyond a threshold, a small change in the growth factor can make a huge difference in the result.”
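The whole A1:B7 exercise fits in a few lines of Python — a sketch of the spreadsheet, with the continuous-compounding ceiling added to show why $12,300 is out of reach:

```python
import math

principal, rate, years = 10_000, 0.10, 2

for n in (1, 3, 10, 30, 100, 300, 1000):     # compoundings per year, the A column
    amount = principal * (1 + rate / n) ** (n * years)
    print(f"{n:5d}  ${amount:,.2f}")         # rises, but by less and less

# Continuous compounding is the ceiling the curve flattens toward:
print(f"limit ${principal * math.exp(rate * years):,.2f}")   # $12,214.03
```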

“Seriously huge.”

“Exponentially huge.”

~~ Rich Olcott

Log-rhythmic gymnastics

I recognized the knock.  “Come on in, Jeremy, the door’s open.”

“Hi, Mr Moire.  Can you believe this weather?  Did Miss Anne like her gelato?  What’s this funny ruler thing that my Dad sent me?  He said they used it to send men to the moon.”


“No, yes, it’s called a slide rule, and he’s right — back in the 1960s engineers used slip-sticks like that when they couldn’t get to a four-function mechanical calculator.  Now, though, they’re about as useful as a cast-iron bath towel.  Kind of a shame, because the slide rule is based on mathematical principles that are fundamental to just about all of mathematical physics.”

“Like what?”

“The use of exponents, for one.  Add exponents to multiply, subtract to divide.  Quick — what’s 100×100×100?”

“Uhh…  Ten million?”

“Nup.  But if I recast that as 10²×10²×10²=10²⁺²⁺²?”

“10⁶.  Oh, that’s a million.”

“See how easy?  We’ve known that kind of arithmetic since Archimedes.  The big advance was in the early 1600s when John Napier realized that the exponents didn’t have to be integers.  Take square roots, for example.  What’s the square root of 100?”

“Ten.”

“Sure — √100=√(10²)=10^(2/2)=10¹=10.  Now write √10 with exponents.”

“Would it be 10^(1/2)?”

“Let’s see.  Do you have a spreadsheet app on that tablet you carry?”

“Sure.”

“OK, bring it up.  Poke =10^(0.5) into cell A1, and =A1^2 into A2.  What do you get?”

“Gimme a sec … the first cell says 3.162278 and the second says … exactly 10.”

“Or as exact as that software is set up for.  So what we’ve got is that 0.5 is a perfectly good power of ten, and exponent arithmetic works the same with it and all the other rational numbers that it does with integers.  Too big a leap, or are you OK with that?”

“OK, I suppose, but what does that have to do with this gadget getting people to the Moon?”

“Take a good look at the C scale, the lowest one on the middle ruler that slides back and forth.  Are the numbers evenly spaced out?”

“No, they’re stretched out at the low end, scrunched together at the high end.”

“Look for 3.16 on there.  You read it like a ruler — the number before the decimal point shows as a digit, then you locate the fractional part with the high and low vertical lines.”

“Got it.  About halfway across.”

“It’s exactly on center if that’s a good slide rule.  A number’s distance along the scale should be proportional to the exponent of 10 (we call it the logarithm) that gives you that number.  The C scale’s left end is 1.0, its right end is 10.0, and 3.162 is halfway.”

“Ah, I see how it works.  Adding distances is like adding exponents.  So if I want to multiply 2 by 3 I slide the middle ruler until its 1 is against 2 on the D scale, then I look for 3 on the C scale and, yes! it’s right next to 6 on the D scale!  Oh, and the A and B scales wrap twice in the same distance so they must be logarithms for squares?  Hah, there’s 10 on B right above where I found 3.16 on C.  K wraps three times so it must be cubes, but why did they call it K?”
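The C-scale geometry Jeremy just discovered is nothing but log₁₀ — a Python sketch of what the slide rule does mechanically (the helper name `scale_position` is ours, not standard):

```python
import math

def scale_position(x):
    """Fractional distance of x along the C scale: 1.0 at the left end, 10.0 at the right."""
    return math.log10(x)

# 3.162... (the square root of 10) sits dead center:
print(round(scale_position(math.sqrt(10)), 3))    # 0.5

# Multiplying 2 by 3 = adding their distances, then reading the scale back:
distance = scale_position(2) + scale_position(3)
print(round(10 ** distance, 10))                  # 6.0
```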

“Blame the Germans, who spell ‘cube‘ with a ‘k‘.  What do you suppose CI does?”

“Hmm, it runs backwards.  Adding with CI would be like subtracting distances which would be like dividing, so … I’ll bet it’s ‘C-Inverse‘!”

“You win the mink-lined frying pan.  So you see how even a simple 5- or 6-scale device can do a lot of calculation.  The really fancy ones had as many as a dozen scales on each side, ready for doing trigonometry, compound interest, all kinds of things.  That’s the quick compute power the rocket engineers used back in the 50s.”

“Logarithms did all that, eh?”

“Yup, that and the inverse operation, exponentiation.  Of course, you don’t have to build your log and exponent system around 10.  If you’re into information theory you might use powers of 2.  If you’re doing physics or pure math you’re probably going to use a different base, Euler’s number e=2.71828.  Looks weird, but it’s really useful because calculus.”

“So logarithms do calculating.  You said something about physical principles?”

“Calculating growth, for instance…”

~~ Rich Olcott

Two Sharp Dice

<further along our story arc>  “Want a refill?”

“No, I’ve had enough.  But I could go for some dessert.”

“Nothing here in the office, care for some gelato?”

We take the elevator down to Eddie’s on 2.  Things are slow.  Jeremy’s doing homework behind the gelato display.  Eddie’s at the checkout counter, rolling some dice.  He gives the eye to her white satin.  “You’ll fit right in when the theater crowd gets here, Miss.  Don’t know about you, Sy.”

“Fitting in’s not my thing, Eddie.  This is my client, Anne.  What’s with the bones?”

“Weirdest thing, Sy.  I’m getting set up for the game after closing (don’t tell nobody, OK?) but these dice gotta be bad somehow.  I roll just one, I get every number, but when I roll the two together I get nothin’ but snake-eyes and boxcars.”

I shoot Anne a look.  She shrugs.  I sing out, “Hey, Jeremy, my usual chocolate-hazelnut combo.  For the lady … I’d say vanilla and mint.”

She shoots me a look.  “How’d you know?”

I shrug.  “Lucky guess.  It’s a good evening for the elephant.”

“Hey, no livestock in here, Sy, the Health Department would throw a fit!”

“It’s an abstract elephant, Eddie.  Anne and I’ve been discussing entropy.  Which is an elephant because it’s got so many aspects no-one can agree on what it is.”

“So it’s got to do with luck?”

“With counting possibilities.  Suppose you know something happened, but there’s lots of ways it could have happened.  You don’t know which one it was.  Entropy is a way to measure what’s left to know.”

“Like what?”

“Those dice are an easy example.  You throw the pair, they land in any of 36 different ways, but you don’t know which until you look, right?”


“Yeah, sure.  So?”

“So your uncertainty number is 36.  Suppose they show 7.  There’s still half-a-dozen ways that can happen — first die shows 6, second shows 1, or maybe the first die has the 1 and the second has the 6, and so on.  You don’t know which way it happened.  Your uncertainty number’s gone down from 36 to 6.”

“Wait, but I do know something going in.  It’s a lot more likely they’ll show a 7 than snake-eyes.”

“Good point, but you’re talking probability, the ratio of uncertainty numbers.  Half-a-dozen ways to show a 7, divided by 36 ways total, means that 7 comes up seventeen throws out of a hundred.  Three times out of a hundred you’ll get snake-eyes.  Same odds for boxcars.”

“C’mon, Sy, in my neighborhood little babies know those odds.”

“But do the babies know how odds combine?  If you care about one event OR another you add the odds, like 6 times out of a hundred you get snake-eyes OR boxcars.  But if you’re looking at one event AND another one the odds multiply.  How often did you roll those dice just now?”

“Couple of dozen, I guess.”

“Let’s start with three.  Suppose you got snake-eyes AND you got snake-eyes AND you got snake-eyes.  Odds on that would be 3×3×3 out of 100×100×100 or 27 out of a million triple-throws.  Getting snake-eyes or boxcars 24 times in a row, that’s … ummm … less than one chance in a million trillion trillion sets of 24-throws.  Not likely.”
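Eddie’s odds can be brute-forced by enumerating all 36 ways the dice can land — a Python sketch, not Sy’s back-of-the-envelope:

```python
from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=2))     # all 36 ways two dice can land

ways_to_7 = sum(1 for a, b in rolls if a + b == 7)
print(ways_to_7, "ways to roll a 7")             # 6 ways

p_extreme = Fraction(2, 36)                      # snake-eyes OR boxcars: add the odds
print(float(p_extreme))                          # ~0.056, "6 times out of a hundred"

# AND: multiply the odds -- 24 extreme rolls in a row
p_24_in_a_row = p_extreme ** 24
print(float(p_24_in_a_row))                      # ~7.5e-31, less than 1 in 10**30
```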

“Don’t know about the numbers, Sy, but there’s something goofy with these dice.”

Anne cuts in.  “Maybe not, Eddie.  Unusual things do happen.  Let me try.”  She gets half-a-dozen 7s in a row, each time a different way.  “Now you try,” and gives him back the dice.  Now he rolls an 8, 9, 10, 11 and 12 in order.  “They’re not loaded.  You’re just living in a low-probability world.”

“Aw, geez.”

“Anyway, Eddie, entropy is a measure of residual possibilities — alternate conditions (like those ways to 7) that give identical results.  Suppose a physicist is working on a system with a defined number of possible states.  If there’s some way to calculate their probabilities, they can be plugged into a well-known formula for calculating the system’s entropy.  The remarkable thing, Anne, is that what you calculate from the formula matches up with the heat capacity entropy.”
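The “well-known formula” Sy alludes to is presumably the Gibbs/Shannon sum S = −Σ pᵢ ln pᵢ over the states’ probabilities.  Applied to the two-dice totals it looks like this (a sketch under that assumption):

```python
import math
from collections import Counter
from itertools import product

# Probability of each two-dice total, from the 36 equally likely outcomes
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = [n / 36 for n in counts.values()]

# Entropy as the probability-weighted sum over states: S = -sum(p * ln p)
entropy = -sum(p * math.log(p) for p in probs)
print(round(entropy, 2))       # 2.27 -- less than the maximum, since 7 is likelier

# Maximum possible entropy: all 36 micro-outcomes treated as distinct states
print(round(math.log(36), 3))  # 3.584
```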

“Here’s your gelato, Mr Moire.   Sorry for the delay, but Jennie dropped by and we got to talking.”

Anne and I trade looks.  “That’s OK, Jeremy, I know how that works.”

~~ Rich Olcott

Enter the Elephant, stage right

“Anne?”

“Mm?”

“Remember when you said that other reality, the one without the letter ‘C,’  felt more probable than this one?”

“Mm-hm.”

“What tipped you off?”

“Now you’re asking?”

“I’m a physicist, physicists think about stuff.  Besides, we’ve finished the pizza.”

<sigh> “This conversation has gotten pretty improbable, if you ask me.  Oh, well.  Umm, I guess it’s two things.  The more-probable realities feel denser somehow, and more jangly. What got you on this track?”

“Conservation of energy.  Einstein’s E=mc² says your mass embodies a considerable amount of energy, but when you jump out of this reality there’s no flash of light or heat, just that fizzing sound.  When you come back, no sudden chill or things falling down on us, just the same fizzing.  Your mass-energy has to go to or come from somewhere.  I can’t think where or how.”

“I certainly don’t know, I just do it.  Do you have any physicist guesses?”

“Questions first.”

“If you must.”

“It’s what I do.  What do you perceive during a jump?  Maybe something like falling, or heat or cold?”

“There’s not much ‘during.’  It’s not like I go through a tunnel, it’s more like just turning around.  What I see goes out of focus briefly.  Mostly it’s the fizzy sound and I itch.”

“Itch.  Hmm…  The same itch every jump?”

“That’s interesting.  No, it’s not.  I itch more if I jump to a more-probable reality.”

“Very interesting.  I’ll bet you don’t get that itch if you’re doing a pure time-hop.”

“You’re right!  OK, you’re onto something, give.”

“You’ve met one of my pet elephants.”

“Wha….??”

“A deep question that physics has been nibbling around for almost two centuries.  Like the seven blind men and the elephant.  Except the physicists aren’t blind and the elephant’s pretty abstract.  Ready for a story?”

“Pour me another and I will be.”

“Here you go.  OK, it goes back to steam engines.  People were interested in getting as much work as possible out of each lump of coal they burned.  It took a couple of decades to develop good quantitative concepts of energy and work so they could grade coal in terms of energy per unit weight, but they got there.  Once they could quantify energy, they discovered that each material they measured — wood, metals, water, gases — had a consistent heat capacity.  It always took the same amount of energy to raise its temperature across a given range.  For a kilogram of water at 25°C, for instance, it takes one kilocalorie to raise its temperature to 26°C.  Lead and air take less.”

“So where’s the elephant come in?”

“I’m getting there.  We started out talking about steam engines, remember?  They work by letting steam under pressure push a piston through a cylinder.  While that’s happening, the steam cools down before it’s puffed out as that classic old-time Puffing Billy ‘CHUFF.’  Early engine designers thought the energy pushing the piston just came from trading off pressure for volume.  But a guy named Carnot essentially invented thermodynamics when he pointed out that the cooling-down was also important.  The temperature drop meant that heat energy stored in the steam must be contributing to the piston’s motion because there was no place else for it to go.”

“I want to hear about the elephant.”

“Almost there.  The question was, how to calculate the heat energy.”

“Why not just multiply the temperature change by the heat capacity?”

“That’d work if the heat capacity were temperature-independent, which it isn’t.  What we do is add up, at each intervening temperature, the heat absorbed there divided by that temperature.  Call the sum ‘elephant’ though it’s better known as Entropy.  Pressure, Volume, Temperature and Entropy define the state of a gas.  Using those state functions, all you need to know is the working fluid’s initial and final state and you can calculate your engine.  Engineers and chemists do process design and experimental analysis using tables of reported state-function values for different substances at different temperatures.”

“Do they know why heat capacity changes?”

“That took a long time to work out, which is part of why entropy’s an elephant.  And you’ve just encountered the elephant’s trunk.”

“There’s more elephant?”

“And more of this.  Want a refill?”

~~ Rich Olcott