The Edge of Pinkness

Susan Kim takes a sip of her mocha latte and eyes me over the rim. “That’s quite a set of patterns you’ve gathered together, Sy, but you’ve left out a few important ones.”

“Patterns?”

[Figure: a log-linear plot]

“Regularities we’ve discovered in Nature. You’ve written about linear and exponential growth, the Logistic Curve that describes density‑limited growth, sine waves that wobble up and down, maybe a couple of others down‑stack, but Chemistry has a couple I haven’t seen featured in your blog.”

“Such as?”

“Log-linear relationships are a biggie. We techies use them a lot to handle phenomena with a wide range. Rather than write 1,000,000,000 or 10⁹, we sometimes just write 9, the base‑10 logarithm. The pH scale for acid concentration is my favorite example. It goes from one mole per liter down to ten micro‑nanomoles per liter. That’s 10⁰ to 10⁻¹⁴. We just drop the minus sign and use numbers between 0 and 14. Fourteen powers of ten. Does Physics have any measurements that cover a range like that?”
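Susan’s log‑trick is easy to check in code. A minimal sketch, with hypothetical concentrations spanning the scale she describes:

```python
import math

def pH(h3o_molar):
    # pH: base-10 logarithm of the H3O+ concentration, minus sign dropped
    return -math.log10(h3o_molar)

# Hypothetical concentrations, moles per liter:
acid_end = pH(1.0)      # one molar         -> pH 0
neutral  = pH(1e-7)     # neutral water     -> pH 7
base_end = pH(1e-14)    # far base end      -> pH 14
```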

“A handful, maybe, in theory. The limitation is in confirming the theory across a billion-fold range or wider. Atomic clocks that are good down to the nanosecond are our standards for precision, but they aren’t set up to count years. Mmmm … the Stefan‑Boltzmann Law that links an object’s electromagnetic radiation curve to its temperature — our measurements cover maybe six or seven powers of ten and that’s considered pretty good.”

“Pikers.” <but I like the way she grins when she says it>

“I took those Chemistry labs long ago. All I remember was acids were colorless and bases were pink. Or maybe the other way around.”

“You’ve got it right for the classic phenolphthalein indicator, but there are dozens of other indicators that have different colors at different acidities. I’ll tell you a secret — phenolphthalein doesn’t kick over right at pH 7, the neutral point. It doesn’t turn pink until the solution’s about ten times less acidic, near pH 8.”

[Image adapted from a file by Damitr, CC BY-SA 4.0]

“So all my titrations were off by a factor of ten?”

“Oh, no, that’s not how it works. I’m going to use round numbers here, and I’ll skip a couple of things like the distinction between concentration and activity. Student lab exercises generally use acid and base concentrations on the order of one molar. For most organic acids, that’d give a starting pH near 1 or 2, way over on the sour side. In your titration you’d add base, drop by drop, until the indicator flips color. At that point you conclude the amounts of acid and base are equivalent, not by weight but by moles. If you know the base concentration you can calculate the acid.”

“That’s about what I recall, right.”

“Now consider that last drop. One drop is about 50 microliters. With a one‑molar base solution, that drop holds 50 micromoles. OK?”

<I scribble on a paper napkin> “Mm-hm, that looks right.”

“Suppose there’s about 50 milliliters of solution in the flask. Because we’re considering the last drop, nearly all the acid has been neutralized. Say the un‑neutralized acid concentration is down to 10⁻³ moles per liter, one millimolar, which puts the solution around pH 3. Fifty milliliters at one millimolar concentration is, guess what, 50 micromoles. Your final drop neutralizes the last of the acid sample.”

“So the acid concentration goes to zero?”

“Water’s not that cooperative. Water molecules themselves act like acids and bases. An H₂O molecule can snag a hydrogen from another H₂O, giving an H₃O⁺ and an OH⁻. Doesn’t happen often, but with 55½ moles of water per liter and 6×10²³ molecules per mole there’s always a few of those guys hanging around. Neutral water runs 10⁻⁷ moles per liter of each, which is why neutral pH is 7. Better yet, the product of the H₃O⁺ and OH⁻ concentrations is always 10⁻¹⁴, so if you find one you can calculate the other. Take our titration for example. One additional drop adds 50 micromoles more base. In 50 milliliters of solution that’s roughly 10⁻³ molar OH⁻; water’s own 10⁻⁷ hardly matters at that point. That implies about 10⁻¹¹ molar H₃O⁺. Take the log and drop the minus sign and you’re near pH 11, having swept right past pH 8 on the way, which sends phenolphthalein into the pink side. Your titration’s good.”
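The drop arithmetic is worth checking directly. A minimal sketch using the volumes and concentrations from the dialogue and water’s 10⁻¹⁴ ion product (note that 50 microliters of one‑molar base works out to 50 micromoles):

```python
import math

KW = 1e-14             # [H3O+][OH-] ion product of water
drop_L = 50e-6         # one drop, about 50 microliters
base_molar = 1.0       # one-molar base, as in the dialogue
flask_L = 50e-3        # about 50 milliliters in the flask

drop_moles = drop_L * base_molar        # 5e-5 mol: 50 micromoles per drop

# One drop past the end point: the excess shows up as OH-.
oh_molar = drop_moles / flask_L + 1e-7  # water's own 1e-7 barely matters here
h3o_molar = KW / oh_molar               # the ion product fixes the H3O+ level
pH = -math.log10(h3o_molar)             # about 11: past phenolphthalein's pink edge
```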

I eye her over my mug of black mud. “A gratifying indication.”

~~ Rich Olcott

The Latte Connection

An early taste of Spring’s in the air so Al’s set out tables in front of his coffee shop. I’m enjoying my usual black mud when the Chemistry Department’s Susan Kim passes by carrying her usual mocha latte. “Hi, Sy, mind if I take the socially distant chair at your table?”

“Be my guest, Susan. What’s going on in your world?”

“I’ve been enjoying your hysteresis series. It took me back to Physical Chemistry class. I’m intrigued by how you connected it to entropy.”

“How so?”

“I think of hysteresis as a process, but entropy is a fixed property of matter. If I’m holding twelve grams of carbon at room temperature, I know what its entropy is.”

“Mmm, sorta. Doesn’t it make a difference whether the carbon’s a 60‑carat diamond or just a pile of soot?”

“OK, I’ll give you that, the soot’s a lot more random than the diamond so its entropy is higher. The point remains, I could in principle measure a soot sample’s heat capacity at some convenient temperature and divide that by the temperature. I could repeat that at lower and lower temperatures down to near absolute zero. When I sum all those measurements I’ll have the entropy content of the sample at my starting temperature.”
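Susan’s measure-and-sum procedure amounts to numerically integrating C(T)/T from near absolute zero upward. A minimal sketch with a made-up Debye-style T³ heat capacity (the constant is purely illustrative):

```python
# Third-law entropy: S(T) = integral from 0 to T of C(T')/T' dT'.
# Toy heat capacity: Debye T^3 law, C = a*T^3, with an arbitrary constant a.
a = 1e-6  # illustrative only, J/(K^4)

def heat_capacity(T):
    return a * T**3

def entropy(T_final, steps=100000):
    """Sum C(T)/T over small temperature steps, as Susan describes."""
    dT = T_final / steps
    total = 0.0
    for i in range(steps):
        T = (i + 0.5) * dT            # midpoint of each small step
        total += heat_capacity(T) / T * dT
    return total

# For C = a*T^3 the exact answer is a*T^3/3, a handy cross-check.
S = entropy(300.0)
```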

“A classical definition, just what I’d expect from a chemist. But suppose your soot spills out of its test tube and the breeze spreads it all over the neighborhood. More randomness, higher entropy than what you measured, right?”

“Well, yes. I wouldn’t have a clue how to calculate it, but that goes way beyond Carnot’s and Clausius’ original concept.”

“So entropy has at least a thin linkage with history and hysteresis. To you chemists, though, an element or compound is timeless — lead or water have always been lead or water, and their physical constants are, well, constant.”

“Not quite true, Sy. Not with really big molecules like proteins and DNA and rubber and some plastics. Squirt a huge protein like catalase through a small orifice and its properties change drastically. It might not promote any reaction, much less the one Nature designed it for. Which makes me think — Chemistry is all about reactions and they take time and studying what makes reactions run fast or slow is a big part of the field. So we do pay attention to time.”

“Nice play, Susan! You’re saying small molecules aren’t complex enough to retain memories but big ones are. I’ll bet big molecules exhibit hysteresis.”

“Sure they do. Rubber molecules are long-chain polymers. Quickly stretch a rubber band to its limit, hold it there a few seconds then let go. Some of the molecular strands lock into the stretched configuration so the band won’t immediately shrink all the way down to its original size. There’s your molecular memory.”

“And a good example it is — classic linear Physics. How much force you exert, times the distance you applied it through, equals the energy you expended. Energy’s stored in the rubber’s elasticity when you stretch it, and the energy comes back out on release.”

“Mostly right, Sy. You actually have to put in more energy than you get out — Second Law of Thermodynamics, of course — and the relationship’s not linear. <rummaging into purse> Thought I had a good fat rubber band somewhere … ah‑hah! Here, stretch this out while you hold it against your forehead. Feel it heat up briefly? Now keep checking for heat while you relax the band.”

“Hey, it got cold for a second!”

“Yep. The stretched-out configuration is less random so its entropy and heat capacity are lower than the relaxed configuration’s. The stretched band had the same amount of heat energy, but with less heat required per degree of temperature that amount of energy made the band hotter. Relaxing the band let its molecules get less orderly. Heat capacity went back up; temperature went back down.”

“Mmm-HM. My hysteresis diagram’s upward branch is stretch energy input and the downward branch is elastic energy output. The energy difference is the area inside the hysteresis curve, which is what’s lost to entropy in each cycle and there we have your intriguing entropy‑hysteresis connection. Still intrigued?”
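Sy’s energy-lost-per-cycle bookkeeping is just the area enclosed by the loop, which the shoelace formula computes for any closed cycle of points. A minimal sketch with a made-up (stretch, force) loop:

```python
def loop_area(points):
    """Shoelace formula: area enclosed by a closed polygon of (x, y) points."""
    n = len(points)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the loop
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0

# Illustrative loop: stretch (m) across, force (N) up;
# the stretch branch sits above the relax branch, as in a hysteresis diagram.
loop = [(0.0, 0.0), (0.1, 3.0), (0.2, 8.0),   # stretching: energy in
        (0.2, 6.0), (0.1, 2.0)]               # relaxing: less energy out
energy_lost = loop_area(loop)  # joules lost to heat/entropy per cycle
```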

“Enough for another latte.”

~~ Rich Olcott

Hysteresis Everywhere

“We’ve known each other for a long time, ain’t we, Sy?”

“That we have, Vinnie.”

“So I get suspicious when we’ve specific been talking about a magnetic field making something else magnetic and you keep using general words like ‘driver’ and ‘deviation’. You playing games?”

“You caught me. The hysteresis idea spreads a lot farther than magnetism. It addresses an entire dimension Newton was too busy to think about — time.”

“Wait a minute. Newton was all about velocity and acceleration and both of them are something‑per‑time. It’s right there in the units. Twice for acceleration.”

“True, but each is really about brief time intervals. Say you’re riding a roller‑coaster. Your velocity and acceleration change second‑by‑second as forces come at you. Every force changes your net acceleration immediately, not ten minutes from now. Hysteresis is about change that happens because of a cause some time in the past. Newton didn’t tackle time‑offset problems, I suppose mostly because the effects weren’t detectable with the technology of his time.”

“They had magnets.”

“Permanent ones, not electromagnets they could control and measure the effects of. Electromagnetic hysteresis generates effects that Newton couldn’t have known about. Fahrenheit didn’t introduce his temperature scale until a few years before Newton died, so science hadn’t yet discovered temperature‑dependent hysteresis effects. The microscope had been around for a half‑century or so but in Newton’s day people were still arguing about whether cells were a necessary part of a living organism. Newton’s world didn’t have an inkling of cellular biophysics, much less biophysical hysteresis. At human scale, country‑level economic data if it existed at all was a military secret — not a good environment for studying cases of economic hysteresis.”

“So what you’re saying is that Newton couldn’t have tackled those even if he’d wanted to. Got it. But that’s a pretty broad list of situations. How can you say they’re all hystereseseses, … loopy things?”

“They’ve all got a set of characteristics that you can fit into similar mathematical models. They’re all about some statistical summary of a complex system. The system is under the influence of some outside driver, could be a physical force or something more abstract. The driver can work in either of two opposing directions, and the system can respond to the driver to change in either of two opposing ways. Oh, and a crucial characteristic is that the system has a buffer of some sort that saves a memory of what the driver did and serves it up some time later.”
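Sy’s checklist — a driver that can push in two opposing directions, a system that responds either way, and a memory buffer — maps onto the simplest mathematical hysteresis unit, often called a relay. A minimal sketch, with arbitrary thresholds:

```python
class Relay:
    """Simplest hysteresis model: the output flips up past +threshold,
    down past -threshold, and in between remembers what the driver did last."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.state = -1  # start in the 'down' configuration

    def respond(self, driver):
        if driver >= self.threshold:
            self.state = +1        # driver strong one way: system follows
        elif driver <= -self.threshold:
            self.state = -1        # driver strong the other way
        return self.state          # otherwise: memory keeps the old state

nail = Relay()
# Sweep the driver up, then back down: at the same driver strength the
# response differs depending on history -- that's the loop.
up = [nail.respond(d / 10) for d in range(-10, 11)]
down = [nail.respond(d / 10) for d in range(10, -11, -1)]
```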

“Wait, lemme see if I can match those pieces to my magnetic nail. OK, the driver is the outside magnetic field, that’s easy, the system is the magnetic iron atoms, and the summary is the nail’s field. The driver can point north‑to‑south or south‑to‑north and the atoms can, too. Ah, and the memory is the domains ’cause the big ones hold onto the direction the field pointed last. How’d I do?”

“Perfect.”

“Goody for me. So why are those guys on the radio saying the economy is hysterical, ‘scuse, has hysteresis? What’s which part?”

“Economies are complex beasts, with a lot of separate but interacting hysteresis loops. These guys, what were they discussing at the time?”

“Unemployment, if I remember right. They said the job market is sticky, whatever that means.”

“Good example. Here’s our basic hysteresis loop with some relabeling. Running across we’ve got our driver, the velocity of money, which claims to measure all the buying and selling. Up‑and‑down we’ve got total employment. The red dot is the initial equilibrium, some intermediate level where there’s just enough cash flowing around that some but not all people have jobs. Then a new industry, say cellphones, comes in. Suddenly there’s people making cellphones, selling cellphones, repairing cellphones –”

“I get the idea. More activity, money flows faster, more jobs and people are happy. OK, then the pandemic comes along, money slows down, jobs cut back and around we go. But where’s the stickiness?”

“In people’s heads. If they get into Depression thinking, everyone holds onto cash even if there’s a wonderful new cellphone out there. People have to start thinking that conditions will improve before conditions can improve. That’s the delay factor.”

“Hysterical, all right.”

~~ Rich Olcott

Elephant And Pengy

(a hat-tip to Mo Willems, whose Elephant and Piggy books helped my grandkids discover reading)

“Hey, Sy, how come my magnetized nail’s hysteresis loop is so wide? It makes sense that the end‑case magnetizing happens because all the iron atoms get lined up in one direction or the other. But why ain’t the blue up‑curve right on top of the down‑curve?”

“Why do you think it should be, Vinnie?”

“Well, the red curve’s different because you got the outside field herding the iron atoms into domains where they all point the same way and that makes the nail’s magnetism grow from zero, and then the domains that agree with the outside magnetic field eat up the other domains until like I said they saturate. But on both sides of the blue loop the domains already exist, right, so the herding’s all done. Up or down it’s only domains growing and shrinking. Seems to me that the curves oughta be the same.”

“They are, near as I could draw them. You’re just not looking at them right. Rotate it 180°, see how they match up.”

“How ’bout that, they do, mostly. What’s going on?”

“You picked up that the vertical axis represents strength and direction, but you missed that the horizontal axis also represents strength and direction. Neither axis starts at zero, they’re both centered on zero. The driver is the outside magnetic field. No strength in the middle, increasing north‑bound strength to the right, south‑bound strength to the left. Start at the head‑end‑north corner and go down branch 2. The north‑bound driver strength decreases. That relaxes some of those north‑pointing domains and the nail’s net magnetism decreases just a bit. When the outside field’s strength gets down to neutral, about at the upper arrow, the nail’s still strongly magnetized. Most of the domains remember which way they were pointing. That’s the history that makes this hysteresis. The domains stay there until the outside field gets strong enough south‑bound to make a difference. That grows the south‑bound domains at the expense of northbound ones. All that goes on until we get to saturation at head‑end‑south corner and then we run exactly the reverse sequence. For most materials, the two extreme fields have the same strength, just opposite directions.”

“Wait, you said ‘for most materials.’ Different materials have different widths on that picture?”

“Good catch. Yes, there’s ‘hard’ ones like rare earth magnets. They have a really wide hysteresis loop you can’t demagnetize without a really strong field. That’s good where you want a permanent magnet you won’t have to recalibrate, like on a spacecraft bound for Jupiter. You’d want a ‘soft’ magnet with a narrow hysteresis loop for something like a transformer core that has to switch polarity sixty times a second.”

<longish contemplative silence> “Sy, I just got a great idea! And it uses that entropy elephant stuff you wrote about.”

“All right, out with it.”

“OK. When the nail is magnetized, it’s got all or at least most of its iron atoms pointing in the same direction, right? And when the outside field demagnetizes it, the atoms point all over the place, right? So the not‑magnetized nail has randomness, that’s entropy, and the magnetized one doesn’t. Where did the entropy come from? Gotta be from the outside, right? Can we use this to like suck entropy out of things?”

“Right, right and sorta right. I’m not happy with the idea of pumping entropy around. What’s really in play is energy, sometimes as magnetic field energy and sometimes as heat. You’ve got the core idea for a magnetic refrigerator. Put a field‑magnetized transfer material in contact with what you want to cool, then turn off the outside field. Heat from the target flows into the material, jiggles the atoms and scrambles the magnetization. Break the contact, cool and re‑magnetize the material and repeat. The idea’s been around since the late 1800s. The problem has been finding the right material to make it work. The best stuff has a tall, narrow hysteresis loop so it can be strongly magnetized yet forget it easily. Researchers have finally found some good candidates.”
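The refrigeration cycle Sy outlines can be written as toy bookkeeping. Everything here — the temperatures, the transfer fraction, the function name — is made up for illustration; the sketch only tracks which step moves heat where:

```python
def cooling_cycle(target_T, material_T, transfer=0.1):
    """One magnetocaloric cycle, toy version.
    transfer: fraction of the temperature gap each contact step closes.
    Step 1: magnetize the material away from the target; dump the released
            heat to the surroundings so the material returns to material_T.
    Step 2: contact the target and switch the field off; the demagnetized
            material can hold more entropy, so heat flows target -> material."""
    heat_moved = transfer * (target_T - material_T)
    return target_T - heat_moved  # target ends the cycle a bit cooler

T = 300.0       # target starts at room temperature, kelvins (illustrative)
sink = 290.0    # material re-equilibrates to this between cycles
for _ in range(20):
    T = cooling_cycle(T, sink)   # T creeps down toward the sink temperature
```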

“Too late to the party, huh?”

“Sorry.”

~~ Rich Olcott