Tuesday, October 23, 2018

Loud black holes

Stephen Hawking's most famous contribution to physics was the proposal that, when Einstein's General Theory of Relativity is combined with quantum mechanics, black holes should actually glow like any other hot object, even though they are perfectly black. It's not just that black holes could be hot and glow, but that they are inherently hot (or at least a little bit warmer than absolute zero), with a temperature inversely proportional to their mass. It remains unclear whether Hawking's theory is actually true, but even just as a hypothesis it has inspired a generation of theoreticians, because it suggests some kind of profound connection between relativity, quantum mechanics, and thermodynamics.
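Quantitatively, Hawking's temperature for a black hole of mass M is

```latex
T_H \;=\; \frac{\hbar c^3}{8\pi G M k_B} \;\approx\; 6\times10^{-8}\,\mathrm{K}\,\times\,\frac{M_\odot}{M},
```

so a black hole of one solar mass would glow at a few tens of nanokelvin, many orders of magnitude colder than the 2.7 K cosmic microwave background.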

We haven't yet been able to test Hawking's hypothesis astronomically, because even though we've found things far out in space that we're sure are black holes, the thermal glow they would have if Hawking were right would still be too faint to see. If we could get a little black hole in a lab here on Earth, we might make revolutionary observations ... but we probably shouldn't try this.
 
Let's not do this.


An idea from William G. Unruh might let us study black holes here on Earth in a safe way, however, by simulating them with fluid dynamics. What Unruh pointed out is that sound waves in a flowing fluid behave mathematically just like light waves in the curved spacetime of General Relativity. The sonic analog of a black hole—a sonic ergoregion—is a region in which the fluid itself is flowing faster than its own speed of sound. Sound waves can pass into this region, but cannot get back out again, because the supersonic fluid drags them along faster than they can run.
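Schematically, Unruh's observation is that low-amplitude sound in an irrotational flow with velocity v(x) and local sound speed c propagates like a massless field in an effective curved metric of the form

```latex
ds^2 \;\propto\; -\left(c^2 - |\mathbf{v}|^2\right)dt^2 \;-\; 2\,\mathbf{v}\cdot d\mathbf{x}\,dt \;+\; |d\mathbf{x}|^2
```

(the omitted overall prefactor depends on the fluid density and, awkwardly for one-dimensional models, on the number of spatial dimensions). The coefficient of dt² changes sign exactly where the flow goes supersonic, which is the metric's way of saying that sound inside the ergoregion can no longer escape.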

Unruh's fluid analog idea has been pursued by quite a few people over the years. Back in 2001 Luis Garay, Ignacio Cirac, Peter Zoller and I initiated the theoretical study of how to make sonic black holes in Bose-Einstein condensates. BECs would be great candidates, we thought, because they are very controllable (so we might get a supersonic flow going), they are superfluid (so no friction to slow down the flow), and they are quantum mechanical (so Hawking's effects might show up). Other people have greatly extended this work, and just a few years ago Jeff Steinhauer actually did some experiments on supersonic condensates and found quantum-correlated sound waves coming out of the sonic black hole, very much like the quantum radiance of Hawking's theory.

"Beyond the mountains, more mountains" is a Haitian saying. Now that we've seen Hawking effects in a lab here on Earth, we have new questions. One concerns a simple geometrical issue. Our 2001 work assumed a one-dimensional model for simplicity, and everyone after us seems to have maintained this convention, even up to Steinhauer's experiments, which use long, thin clouds of ultracold gas. A subtlety that we and everyone else seem to have ignored, however, is that Unruh's analogy between fluid flow and curved spacetime really needs at least two spatial dimensions. In 1D the analogy only holds approximately.

There are other good reasons to look at black holes in more dimensions as well. Bekenstein's tantalizing concept of purely geometrical entropy in black holes needs at least two dimensions. So in the past year we finally bit the computational bullet and did some theoretical simulations of two-dimensional sonic black holes. What we found is an instability that doesn't show up in one dimension: the formation of vortices. Our sonic ergoregion became a turbulent sea of quantized whirlpools. It generated plenty of sound waves, but not the kind of sound predicted by Hawking.

Vortices and sound waves from a sonic black hole. The power spectrum is not thermal.
So is this bad news for analog black holes? That depends on how you look at it, because the analogy between fluid flow and curved spacetime only ever works as far as mild sonic ripples. Once the fluid starts doing violent things like forming vortices, the analogy breaks down. There's no reason to expect nonlinear quantum fluid dynamics to reproduce nonlinear quantum gravity.

So one interpretation of our results would be to say that the appearance of vortices marks the end of the experiment, and only everything up to that point counts as a black hole simulation. As long as we only have low-amplitude sound waves moving through a smooth, steady flow, Unruh's analogy still works, and Hawking's theory should still be confirmed.

The problem with this, though, is that the low-amplitude regime in which everything works is not the regime that holds mysteries. We don't need cutting-edge experiments to solve linear equations, and anyway there was never any doubt about Hawking's linear calculations as far as they went. The big question is whether black hole thermodynamics survives in fully nonlinear quantum gravity. 

Unruh's analogy was never supposed to go beyond the linear regime, though. So why have we even been bothering with analog black holes at all, if they are inherently incapable of answering our only real questions? 

Well, when it comes to quantum gravity, we are desperate. We've stopped hoping for reliable answers. We'd be glad if we could just get some hints. And although nonlinear quantum fluid dynamics certainly won't reproduce nonlinear quantum gravity, it does at least give us a real, physical system which has nonlinear quantum dynamics and can also look like curved spacetime in some situations. 

We don't actually know anything more than that about real quantum gravity. Recent calculations based on string theory have suggested that real black holes out in space may be quantum "fuzzballs" that are quite different, inside their ergoregions, from the classical black holes of General Relativity. Our turbulent ergoregions do happen to look something like this.

And that may well be a meaningless fluke. It still means that experiments on sonic black holes in Bose-Einstein condensates will be able to put some data points on the big empty graph of nonlinear quantum systems with even limited resemblance to General Relativity. We know so little about quantum gravity, and it would be so wonderful to learn about black hole thermodynamics. It's well worth listening to what the sonic black holes have to say.

Interested readers can find our paper in the open-access online journal New Journal of Physics. (NJP is a joint venture of the Institute of Physics and the Deutsche Physikalische Gesellschaft. It's a mainstream peer-reviewed journal that is one of the standard examples to show that not all open-access online journals are scams.)


Wednesday, August 23, 2017

Tiny Quantum Engines

The previous post mentioned some work on minimal mathematical models for combustion engines, and ended with a link to our first published paper on this subject. That's the paper in which we introduce the class of dynamical systems that we call Hamiltonian daemons. The Hamiltonian part of this name refers to William Rowan Hamilton's formulation of classical mechanics. Hamilton's restatement of all Newton's F = ma stuff dates from 1833 but is still the dominant dialect in physics today, mainly because it adapts well to quantum mechanics.

The daemon part of our name for our systems is partly meant to recall Maxwell's Demon. The devout Presbyterian Maxwell wasn't thinking of an evil spirit: he meant the kind of benevolent natural entity that was the original meaning of the Latin word daemon. Maxwell's daemon was an imaginary tiny being who could manipulate the individual atoms in a gas.  Physicists still think about Maxwell's daemon today, as a way to express questions about the microscopic limits of thermodynamics.

Our daemon name is also partly in analogy to Unix daemons, which are little processes that operate autonomously to manage routine tasks without user input. Hamiltonian daemons are mathematical representations of small, closed physical systems which, without any external power or control, can exhibit steady transfer of energy from fast to slow degrees of freedom. In this way they are ultimate microscopic analogs of combustion engines, and learning about them may teach us about the role of thermodynamics on microscopic scales.
A simple Hamiltonian daemon: like a tiny car driving uphill. 
(The figure in our published paper does not have the little 
pictures of cars. Physical Review is too serious for that.) 

To illustrate how daemons work we took a simple model that resembles a tiny car that tries to drive up an infinite hill. The car's engine has to be started the old-fashioned way, by push-starting, so the car begins already rolling uphill at some speed. Since it's going uphill, it slows down; some of the time (the dashed curve), the engine fails to ignite and the car just keeps slowing down until it rolls back down the hill.

Other times, however, the engine catches (solid curve). The car then keeps on driving uphill at a steady speed, expending high-frequency fuel. Since our simple hill is infinite, when the fuel runs out the car will eventually roll back down, but if we add the complication of making the hill level off at some point, the car can climb to the top and then drive on to some goal. 
Schrödinger's Cat 
as a power source?

More recently we thought, "Hey, really small engines should be quantum mechanical." What would happen if you tried to extract work from a fuel-powered engine that was entirely quantum? What would a quantum Hamiltonian daemon be like? Well, we found out. 

It turns out that the quantum daemon behaves rather like a random daemon. Sometimes the engine ignites, and sometimes it doesn't—even if you do everything exactly the same every time. The quantum daemon also burns its fuel quantum by quantum, and it keeps driving uphill by giving itself a series of kicks that suddenly bump up its speed. After every such kick, there is a chance that the quantum engine will spontaneously stall, and refuse to run further, even if there is fuel left. So over time the quantum probability distribution for the height of the car develops a series of branches, as in the figure here.
The quantum daemon burns fuel in quantized steps, 
and can randomly stall.

It's not just randomness going on at every kick, though. If you look closely you can see interference fringes in the probability distribution. Since the whole daemon system is closed, the evolution is actually all unitary. Every branch of the figure, which represents a different possible trajectory of the car, is in coherent quantum superposition. Since there is more fuel left if the engine stalls sooner, but less fuel if the engine goes on longer, the superposition of all the branches is a total quantum state with high quantum entanglement between fast and slow degrees of freedom. Since the number of branches grows over time as the quantum daemon operates, the von Neumann entropy of the slow degrees of freedom increases.
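As a toy caricature of this branching (not the model of our paper: the fixed per-kick stall probability, and treating the branches as classical alternatives rather than a coherent superposition, are both illustrative simplifications), one can watch the entropy of the branch distribution grow with the number of kicks:

```python
from math import log

def branch_distribution(p_stall, n_kicks):
    """Probabilities of stalling after kick 1, 2, ..., n_kicks,
    plus a final branch that is still running after all the kicks."""
    probs = [p_stall * (1 - p_stall) ** (k - 1) for k in range(1, n_kicks + 1)]
    probs.append((1 - p_stall) ** n_kicks)  # never-stalled branch
    return probs

def shannon_entropy(probs):
    """Entropy (in nats) of a discrete probability distribution."""
    return -sum(p * log(p) for p in probs if p > 0)

# More kicks -> more branches -> higher entropy, echoing the growing
# von Neumann entropy of the daemon's slow degrees of freedom.
for n in (2, 5, 10, 20):
    print(n, round(shannon_entropy(branch_distribution(0.1, n)), 3))
```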

Readers who are familiar with quantum mechanics may want to read more in Physical Review E 96, 012119 (2017), or in the free and legal ArXiv e-print version. Our first paper on daemons (Phys. Rev. E 94, 042127 (2016)) is also on ArXiv.

Friday, January 13, 2017

Humphrey Potter and the Ghost in the Machine

Humphrey Potter adds strings to Mr Newcomen's engine so that he can go play.
The first steam engines were slow-working beasts that needed constant human tending. To work its minutes-long cycle, the first Newcomen steam engine needed a human hand to open and close valves for cold water and steam. The machine provided great power, but someone had to control it.

At some point, however, some brilliant mechanical mind noticed that the machine was, after all, creating its own motion. Why not connect the moving piston to the valves, and let the engine run itself?

Some sources even claim to identify the person who first made this brainstorm work: a young boy named Humphrey Potter, who was paid to operate a Newcomen engine by hand. Young Potter hooked up a system of “strings and latches” that made the machine itself do his work for him. Then he ran off to play.

Apparently the sources for this story are not considered reliable. I can’t find any of them myself, and the modern texts that mention Humphrey Potter refer to his story as a legend. That’s a shame, because I’d like to think I could put a name to the person who first made a useful engine run all by itself. 

Apart from the practical advantages of getting a machine to run itself, the scientific point of lazy Potter’s clever trick is that it took any form of intelligence out of the loop of doing work. The entire engine process, from fuel combustion to pumping water, was now a purely mechanical operation. Humphrey Potter banished the ghost from the machine. He made it clear that everything that was occurring, in the marvelous process that turned lumps of coal into useful work, was occurring strictly under the basic laws of physics. It all ran all by itself.

In one way we take this insight for granted now, and may even extend it to processes more complex than lifting weight by burning coal, processes like life and consciousness. Yet in a very practical sense science still has not fully taken the point that engines can run as closed systems, without external power or control, and without any ingredients beyond basic physics. Engineers invoke higher level concepts like pressure and temperature, and while these are clearly valid, they leave a lot of details hidden under the hood. Theoretical physicists analyze parts of the whole process, like how gas particles adapt their motion when a piston moves in a predetermined way, but they do not simultaneously consider how the piston motion is itself determined by the gas particles bouncing off it. We don’t really believe there is any ghost in the machine, but when we have to explain how the machine works, somehow we keep sneaking the ghost back in, in some form or other.

If there's something strange
in your phase space neighborhood ...
At least, until now. Quite recently we have discovered a very simple mathematical model for a minimal kind of combustion engine, that runs as a closed system under basic physics. So we now have a bare-metal, first-principles model for an engine. Its operation is in some ways very reminiscent of a steam engine, but in other ways it is radically different. We are hoping that it may teach us about the microscopic roots of thermodynamics, but someday, perhaps, it might be the basis of a whole new class of power nanotechnology.

Readers who know Hamiltonian mechanics may enjoy our first paper on this subject. "Hamiltonian analogs of combustion engines: A systematic exception to adiabatic decoupling" is published in Physical Review E 94, 042127 (2016), and is also available in e-print form at https://arxiv.org/abs/1701.05006.

Wednesday, October 1, 2014

Newton versus Heisenberg

Death of a microbe
The Heisenberg Uncertainty Principle has become famous well outside quantum physics, but it is often cited in garbled form. I once found it mentioned in a textbook on social science research methods, where it was defined as the fact that electron microscopes can damage samples when they observe them by electron bombardment. Physicists roll their eyes at such misunderstandings, but I think that physicists themselves often misunderstand the Uncertainty Principle, by confounding Heisenberg’s specifically quantum mechanical relation with a principle of measurement that goes back to Newton. 

What the Uncertainty Principle really means, in practical terms, is that if you construct an apparatus to control one experimental property more tightly, so that the variations in its value between different repetitions of the experiment will be smaller, then past a certain point there will always be some other thing that becomes correspondingly less well controlled, so that its run-to-run variations become wilder. The ‘certain point’ at which this trade-off sets in represents a degree of precise control so high as to be quite unattainable anyway in the macroscopic world. So although the Uncertainty Principle is profound, it is really irrelevant to fields like social science research.
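For position and momentum, the trade-off reads

```latex
\sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2}, \qquad \hbar \approx 1.05\times10^{-34}\,\mathrm{J\,s},
```

where σ_x and σ_p are the run-to-run standard deviations; the minuteness of ħ is precisely why the trade-off only sets in at a level of control unattainable in the macroscopic world.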

Observing something means letting it act on you.
What even many physicists think the Uncertainty Principle means, however, is something that really is widely relevant: the fact that no measurement can ever be purely passive, but always affects the thing being measured. This principle is both true and important, but it is not specifically quantum mechanical.

Observation is a physical process. A meter can only register the position of an object if there is some interaction that makes the object’s position act on the meter. Newton told us, long before Heisenberg, that this means that any meter is also going to react upon the thing it measures. Such ‘observer effects’ are apt to be important when large meters measure tiny things; the Heisenberg Principle, however, is not this, but an additional complication in microscopic measurements.

In the early days of quantum mechanics, critics of the new theory tried to argue that the Uncertainty Principle was not self-consistent, by describing hypothetical experiments that would obey the Principle in each individual process, but yet still lead to an indirect violation of the Principle as an end result. These arguments all had subtle flaws, and the most famous flaws involved reaction effects. Thus the only really solid connection between the Uncertainty Principle and measurement reaction is historical.

Newton observes Heisenberg stealing credit for his ideas.
Newtonian reaction in physical measurements is a distinct concept from Heisenberg Uncertainty, but it does at least seem that one must get the former right in order to understand the latter. So perhaps there really is some deep connection between them. Until that connection comes to light, however, anyone who wants to relate observer effects in general to a basic principle of physics should really be citing Newton, not Heisenberg.

Monday, October 28, 2013

Opposites: Open and closed

Are they really so opposite?
As Niels Bohr used to say, the opposite of an ordinary truth is a falsehood, but the opposite of a profound truth is another profound truth. I offer this opposite pair:

1) All systems are open.
2) All systems are closed.

A closed system is a set of physical things which can be regarded as isolated from the rest of the universe. An open system, in contrast, is affected by things outside itself, even if those things are not directly observed. So these statements are certainly opposite. How are they both true?

What defines 'the system'?
Experiments try to isolate variables, but we can never achieve perfect isolation. Vacuum chambers are made of steel walls, and over time a few stray gas atoms always percolate in and out of tiny cracks or pores in the steel surface. No laboratory building is perfectly insulated from vibrations. High energy cosmic rays can pierce any barrier; and so on. It may be possible to achieve isolation that is excellent for all practical purposes, but all physical systems are open, strictly speaking. 

If we really want to speak strictly, however, then the very concept of a ‘system’ is inherently an approximation. There is really only one system: the universe. The universe as a whole is closed by definition, so all systems are closed. Of course, it is no less impossible in practice to describe the whole universe than it is to seal off a portion of the universe in perfect isolation from the rest. It is often possible, however, to describe a very large closed system.

And indeed this is precisely what we normally do, to identify the distinctive physical features of an open system: we analyze a large closed system, and then discard all the information that does not refer directly to the small sub-system that represents our ‘open system’.

So any system is open, if we want it to be: it is only a matter of how low we set our threshold for ignoring slight influences from external factors. Conversely, however, any system is closed, if we want it to be: it is only a matter of how large we are willing to make our system, to bring relevant external factors within its frame. The distinction between open and closed systems is an important one, but it is not a distinction between two different ways things can really be. It is a distinction between two different ways of thinking about things. Both ways can be good ways of thinking. Both truths are profound.


An engine would still run inside a large box.
It seems to me that too many physicists today have lost sight of the second truth, however. The most profound mystery that physics still faces is the origin of irreversibility. We don't understand why we can't remember tomorrow. And whatever is going on in quantum measurement, it seems to be an empirical fact that all quantum measurement devices rely crucially on thermodynamically irreversible processes to achieve their extreme amplification. No-one can find a clear explanation of irreversibility within closed-system Hamiltonian mechanics, but few people want to accept that our mighty science is still stumped by such a basic question after a century of breakneck progress, so most people like to think that the open system generalization must be the simple solution.

Open systems can't be the basic explanation of irreversibility, because all systems are also closed. Whether or not a system is open is not a physical fact, but an arbitrary choice of perspective in deciding what to include within the system. So the openness of physical systems cannot make a fundamental difference to anything; anything that can be explained as an open system must also be explicable as a larger closed system. A steam engine would still run, at least for a good long time, inside a big impermeable box.

Tuesday, February 19, 2013

A Cup of Heat, Monsieur?

Pierre-Simon de Laplace and Antoine-Laurent de Lavoisier
thought that heat was an invisible liquid.
18th century physicists thought of heat as something much like electricity: an invisible fluid that could flow through other materials or soak into them. They named this hypothetical fluid ‘caloric,’ and it was thought to be a distinct form of material, like air or water, only different. If an object had absorbed a lot of caloric, like a sponge soaking up water, then it was hot. If the caloric drained out, the object cooled.

This was a sensible theory. We all know that objects can become electrically charged and that this changes their properties. A charged balloon can stick to the ceiling, or make your hair stand on end; a charged metal sphere can give you a zap. Electric current really is an invisible fluid, composed (most usually) of material particles called electrons. Most of the time they are bound up in atoms, but when they come loose they can flow into things, or out of them or through them. Things can become charged by soaking up extra electrons.

Soaking up electric charge. 
(Image by Wikimedia user Dtjrh2
used under Creative Commons license.)
In a similar way, it would seem, objects can also soak up heat. Hot objects may expand or cause burns, just as charged objects stick to ceilings or shock people. Heat flows from high to low temperatures, just as electricity flows from high voltage to low. Appealing as it is at this basic level, the analogy turns out to work well even in finer detail. Antoine Lavoisier and Pierre Laplace developed an extensive body of caloric theory that was able to explain heating and cooling and many other thermal phenomena with quantitative accuracy. 

So in the eighteenth century it only made sense to think of heat as something similar to electrical charge. In the following century, heat engines and electric motors would be developed in parallel, and engineers would still think of them in similar terms. Today again people are deciding whether to have a car powered by an electric motor instead of a combustion engine, and the differences seem to be ones of practical detail. 

Today we no longer believe that heat is a material fluid, however. Why not? It's not really as clear-cut an issue as textbooks often make it seem, because today our concept of matter is not as simple as it once was. We know that not even electrons are really these indestructible little specks of hard stuff: they can be created and destroyed in high-energy collisions. And in a lot of ways we still treat heat as if it were a material fluid.

The bottom line, though, is that even though electrons can be born and die, electric charge can't. If electrons appear or disappear they do so together with positrons, so that the net change in charge remains zero. The only way for an object to become charged is for charge to flow into it from elsewhere—or for opposite charge to flow out of the object. There is no way to simply create charge from stuff that is not charge. Heat, in contrast, can flow into or out of things—but that's not the only way to get heat. One can also create heat, without importing it or already having it. It's called friction.

Rub your hands together. Feel it? That's heat. 

You didn't just create any new material substance from nothing. Big Bangs and particle colliders aren't as easy as that. So heat is not a material fluid. What is it, then?

Whatever heat is, you've just made some. It's right there in your hands.

Thursday, February 14, 2013

Fire Glows

It's not just bright.
Humans discovered fire a long time ago, but for most of that time we only used it for warmth and light and cooking, rather as Bilbo Baggins used his magic ring for years just to avoid unwanted callers. Only in the 18th century did James Watt show up to play Gandalf, and reveal that our curious little trinket was the One Ring to rule them all. Fire has enormous power.

Even after centuries of technological progress since Watt, we still find it very hard to beat combustion as a source of power. Burning a tank of fuel releases enough energy to lift cargo all the way to the Moon, even with the horrible inefficiency of a rocket engine. Combustion provides energy, as one says, to burn. Why is fire such a tremendously greater power source than, say, clockwork springs or a windmill? I’ve never seen a clear answer to this question in any physics text, but I think I have found a succinct one of my own. 

Fire glows.
Light oscillates really fast.
The fact that fire glows demonstrates that fire is releasing energy from motions (of electrons in chemical bonds) with frequencies in the range of visible light. Those are very high frequencies, around 10¹⁴ cycles per second. As Planck taught us, energy is proportional to frequency. So if human energy needs are for motion at up to a few thousand RPM, mere hundreds of cycles per second, combustion lets us tap energy resources on a scale greater by a factor of a million million. Combustion delivers so much energy because molecular frequencies are so high.
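To put numbers on that factor (both frequencies below are round, illustrative choices, not measured values):

```python
# Rough comparison of molecular (optical) and mechanical frequencies.
visible_light_hz = 5e14      # mid-visible light: ~5 x 10^14 cycles per second
engine_rpm = 6000            # a briskly revving combustion engine
engine_hz = engine_rpm / 60  # = 100 cycles per second

ratio = visible_light_hz / engine_hz
print(f"frequency ratio ~ {ratio:.0e}")  # on the order of a million million
```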

This is what an engine somehow does.
It isn’t easy to gear all that power down by a factor of 10¹² so we can use it, though. Electrons whir around in molecules far too fast for our eyes to follow. We can’t just throw a harness over them. Even if we could, they are very light in weight. They bounce off things, rather than dragging them along. To tap them for power, we need some clever way of gently bleeding off their enormous but very rapidly whirring energy, a tiny bit at a time.  There's more to it than just installing an awful lot of tiny gears. 

Getting fire to do work means transferring power across a huge frequency range. That's what thermodynamics is all about. The reason that thermodynamics doesn’t seem very much like the rest of physics is that energy transfer across a huge frequency range is an extreme case, in which certain otherwise obscure aspects of physics become very important. That makes them important in general, though, because high frequencies can deliver so much power. It's well worth learning how thermodynamics really works.

Raising Water by Fire

James Watt dramatically improved the steam engine, but he didn’t invent it. In his time, steam engines were already a practical and economical success. The machines of Thomas Newcomen and Thomas Savery had already begun the new era in human technology. 

Savery had a head for marketing as well as for steam. In 1702 he produced a pamphlet advertising his device as “An Engine to Raise Water By Fire”. His description may have been poetic, but it was literally exact. His engine pumped water by burning coal. Its killer application was draining coal mines. 

Humans may have discovered fire in distant prehistoric times, but the really useful thing about fire was only discovered in the 18th century. Never mind cooking or smelting metal or scaring wolves: fire can raise an awful lot of water. And if you can raise water, you can do pretty much anything, because raising water means you can exert force.

Savery’s and Newcomen’s engines were crude and simple, and by that I don’t mean that they were primitively made, rattling too much or leaking steam. They were just stupid designs, compared to Watt’s machines. They didn’t even use steam pressure to actually do their work, but just let the steam balance atmospheric pressure. Then they condensed the steam, by shooting in cold water, and let the suddenly unbalanced atmospheric pressure do the work. Savery’s engine didn’t even turn any moving parts, but just sucked water through pipes. It wasn’t so much more than a proof of concept, like the aeolipile.

Hindsight is 20/20, of course, and it’s not really fair to call Savery and Newcomen stupid. Watt’s proper steam-pressure engines also needed stronger boilers. The point is that even the crudest engines were such a quantum leap in power technology, compared to wind, water, or animal power, that they rapidly changed the world. In effect they turned lumps of coal into unprecedentedly huge amounts of practical work. Up until 1775, the Russian navy had been using two enormous windmills to drain its dry docks at Kronstadt; each time they drained the docks in order to work on a ship, the draining job took a year. When they installed a single Newcomen engine, it did the job in two weeks.

With coal-fired steam engines, the human capacity to exert physical force suddenly soared. Even today, the biggest problem with changing to power sources other than combustion is that fire can provide so much more power than, say, sunlight or wind. We humans keep thinking wistfully about switching away from combustion, to some form of clean energy, but we really want to maintain our current energetic lifestyle. We're like a big city lawyer who wants to quit the firm and become a social worker, but also wants to keep up the mortgage payments.

Why is fire so very good for raising water? I have some thoughts on this, based on the fact that fire glows.

Thursday, April 26, 2012

Vitruvian Machine

Sometime in the late first century BCE, the Roman architect Marcus Vitruvius Pollio described the oldest known steam engine: the aeolipile. Devices of this type were described again in the following century by Hero (or Heron) of Alexandria, who gave more detail about their construction. Aeolipiles were hollow metal tanks with angled vents, filled with water and mounted on a pivot over a fire. When the water inside the tank boiled, the jets of steam hissing out through the vents would make the whole tank spin.

While the later Hero is remembered more often than the earlier Vitruvius in connection with these ancient gadgets, Hero himself refers to even earlier work on them by another Alexandrian, Ctesibius. No writings by Ctesibius have survived, and although later writers attributed several inventions to him, they do not mention the aeolipile as one of them, so the actual inventor may have been even more ancient.

Hero’s explanations imply that aeolipiles were temple showpieces whose rotation astounded onlookers but did nothing useful. There are no records or remains to suggest that any form of steam power was ever applied practically in ancient times. But Vitruvius had a different notion of what aeolipiles were good for. He referred to the aeolipile almost as an ancestor of the particle accelerator: “a scientific invention” which could be used to “discover a divine truth lurking in the laws of the heavens.”

Vitruvius was still a classical writer. The book in which he mentions the aeolipile also expounds the theory of architectural proportion, based on the human form, whose illustration by Leonardo da Vinci would become the Renaissance icon of classical humanism: Vitruvian Man. Vitruvius had no idea of the enormous practical potential of steam engines. He mentions the aeolipile in a chapter on weather. The truths he learned from the spinning tank of steam were about wind, not heat and power.

The aeolipile was a toy. It could barely turn itself, much less produce the labor power of a single slave. A far greater advance in power technology than the aeolipile was the medieval invention of the collar harness, which let horses replace oxen as draft animals. The uselessness of the aeolipile is an excellent example of the vastly under-appreciated role of materials in technological development. Whoever invented the aeolipile was a brilliant scientist, who must at least partially have understood deep principles of force and reaction, and then made them work in a real device; but it would be two thousand years before vessels could be forged that would hold enough pressure to let steam power change the world.

Nevertheless the aeolipile did indeed demonstrate a divine truth lurking in natural law. It may not have shown just how much power a machine could deliver, but it showed that a machine could have power to move. It hinted at the future power of artificial heat engines to do work beyond the limits of the human body.

Wednesday, April 21, 2010

What is heat?


Heat is amazing. The energy you could in principle extract, by lowering the temperature of any amount of water by a barely perceptible one degree Celsius, would be enough to lift that same amount of water to a height of over four hundred meters. And the energy you could extract by condensing any amount of steam into liquid water would be enough to lift that same amount of water into space. This is why the Industrial Revolution was such a big deal. Harnessing heat, in fuel-burning engines that drive pistons or spin turbines with hot gases, is what has let us hair-challenged primates conquer the planet. Heat is magic. What is it?
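The arithmetic behind those two claims is short enough to check. Here is a quick sketch in Python (my illustration, not part of the original post), using standard textbook values for water's specific heat and latent heat of vaporization:

```python
# Rough check of the lifting-height claims, using standard
# textbook constants for water (assumed values).
c_water = 4186.0   # specific heat of liquid water, J/(kg*K)
L_vapor = 2.26e6   # latent heat of vaporization, J/kg
g = 9.81           # gravitational acceleration, m/s^2

# Cooling m kilograms by dT releases m*c*dT joules; equating this
# to the lifting energy m*g*h gives a height independent of m.
h_cooling = c_water * 1.0 / g          # cool by 1 degree Celsius
print(f"cooling by 1 C lifts the water about {h_cooling:.0f} m")

# The same comparison for condensing steam into liquid water.
h_condense = L_vapor / g
print(f"condensing steam lifts the water about {h_condense / 1000:.0f} km")
```

The first number comes out just over 400 m, matching the claim above; the second is roughly 230 km, well past the 100 km boundary of space, and since real gravity weakens with altitude the constant-g figure actually understates the height reached.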

Until the mid 19th century, people thought that heat was a special kind of fluid, like air or water but different, and invisible. They called it "caloric" (the related "phlogiston" was the substance supposedly released in burning). Some considered that cold was a distinct fluid, "frigoric", while others argued that cold was simply the absence of heat. These were by no means stupid or crazy theories. Electrical charge is a phenomenon about as basic and important as heat, and it really is carried by two different kinds of stuff, namely electrons and protons (as well as other much less common particles), which carry opposite charges. Objects can become positively or negatively charged if they pick up excess protons or electrons. It was not silly to imagine that objects might become hot or cold by picking up excess caloric or frigoric.

But early physicists figured out that this was wrong, mainly from carefully observing how grinding metal keeps on making it hotter, even when the grinder and the metal are kept well apart from any other objects that might conceivably be able to inject a steady supply of caloric into them. They concluded that heat is actually some form of energy, and that the more familiar kinds of energy carried by moving objects can be converted into heat, through friction; while heat may in turn be converted into motion and useful work, in engines.

But then just what is the difference between heat and work, as forms of energy? It's not easy to get a straight answer even from a fully trained physicist, because the truth is that we're still not completely sure what heat is. If I have many bazillions of atoms all zipping around in a big box, bouncing rapidly off each other and the walls, making up a gas, then I can use statistical mechanics to say an awful lot about heat and pressure and temperature for this gas. But if I have one single atom, perhaps ionized and trapped in a strong electric field, I know that the concept of heat is not even relevant. With one atom, I can compute the motions of its nucleus and of its electrons, rather as I worked out the motion of solid objects in freshman physics. It does not even make sense to ask whether the atom is hot or cold. Usually no single atom has heat, but a billion atoms do. So heat is somehow an emergent property of large numbers of atoms together.

"Emergent property" is a fine bit of fashionable philosophical mumbo-jumbo, which spends rather too much time in the blogs of wild-eyed crackpots and tenured philosophers, to be comfortably welcome among respectable scientists. But in the case of heat, you can slurp an emergent property from a cup of coffee. Heat rules the world. It's quite concretely real. So what is it?

Well, we're working on it. There is ample precedent in perfectly well understood physics for new behavior to emerge in larger systems; it's just that in this particularly fundamental case of heat there are still some major obscurities in exactly how it works. But in just the past few years, atomic and optical physicists have gained the capability to make extremely precise and direct measurements on small samples of gas, with only hundreds to thousands of atoms. If heat emerges, we're soon going to be able to catch it in the act. Watch this space.
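The emergence of a thermal distribution from many atoms can be seen in a toy model. The sketch below (my illustration, not from the post) treats atoms as nothing but numbers that swap energy in random pairwise "collisions":

```python
import random

random.seed(1)
N = 5000
energies = [1.0] * N   # every "atom" starts with the same energy

# Repeatedly let two randomly chosen atoms collide, splitting their
# combined energy at a random fraction. Each collision conserves
# energy exactly; nothing else is assumed.
for _ in range(200000):
    i = random.randrange(N)
    j = random.randrange(N)
    if i == j:
        continue
    total = energies[i] + energies[j]
    split = random.random()
    energies[i] = split * total
    energies[j] = total - energies[i]

# The total energy is unchanged, but the individual energies have
# relaxed toward a Boltzmann-like exponential distribution, for
# which the fraction of atoms below the mean is 1 - 1/e, about 0.63.
mean = sum(energies) / N
below_mean = sum(1 for e in energies if e < mean) / N
print(f"mean energy {mean:.3f}, fraction below mean {below_mean:.3f}")
```

No single entry in the list has a temperature, but the ensemble as a whole has settled into a definite statistical shape. That is the sense in which heat is emergent.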

Sunday, June 14, 2009

Statistical Mechanics

Most people these days have heard of quantum mechanics, and how it somehow brings chance and probability into physics on a basic level. This is a misleading truth, because actually quantum mechanics is perfectly deterministic, and not probabilistic at all, until we come to measure anything. Then the probabilities come in, and only then. The problem is that quantum measurement is a very subtle thing that is far from fully understood. And one thing we do know is that inferring how things really are, from how things look, is uniquely tricky in quantum mechanics.

To appreciate the subtlety of quantum mechanics, it helps to know about the older and less tricky place that probability has in physics: statistical mechanics. I doubt that most non-physicists have ever heard of statistical mechanics. In many places one can even earn a Bachelor's degree in physics without ever taking a course in it. This is unfortunate, because statistical mechanics is so important that a physicist who doesn't know about it is like a Scout who doesn't know that fire needs air. Statistical mechanics is a sort of post-processing stage that has to be performed on virtually all the rest of physics — even including quantum mechanics — in order to make sense of anything but the very simplest and most controlled experiments.

Mechanics without statistics is the physics we learn in school. A rock flies through the air, falling under gravity. Ignore air friction, and model the rock as a point with a given mass — a particle. Apply Newton's Laws to particles: that's mechanics.

'In principle,' we may be told, 'the universe is a large number of particles, governed by Newton's Laws.' In practice, of course, most of these particles are beyond our control, beyond our observation, or at least beneath our notice. We do not see the vast swarms of air molecules that surround us and fill our lungs. And even if we could mark their paths, solving Newton's equations for so many interacting particles is far beyond our computational power. Thus do we see the vast gap between the pristine principles of physics, and the practical real world.

Bah. Physics doesn't care about pristine. Sure, part of physics is about trying to reduce everything, 'in principle', to some elegant little Theory of Everything. We're writing one big long footnote to Plato, who wanted everything to boil down to the five regular polyhedra. But that whole grand unified simplicity thing is really the hood ornament of physics, not the engine. The thing that drives physics is putting principles into practice, and codifying practice into principle. So no, the fact that we can't follow every atom does not make a huge gap between physics and reality. There is a whole huge branch of physics which is all about the principles and the practice of dealing with huge numbers of particles that cannot be individually observed, predicted, or controlled.

And that is the branch of physics called statistical mechanics. It uses probability theory to get the best results we can from what we do know, in spite of what we don't. Whether or not God plays dice with the universe, physicists do, to make up for the fact that we're not God.

Saturday, May 30, 2009

Time Reversal

Physics is conflicted about whether the future is fully determined and just waiting for us to reach it, or is perhaps at least partly undetermined. The undetermined viewpoint is represented in statistical mechanics, and to some extent in quantum mechanics. The deterministic position is taken by all the rest of physics, including most of quantum mechanics. This raises questions about whether physics supports predestination or free will, but I don't want to look at these now. I want to focus instead on the difference between future and past. Why can't I remember tomorrow?

Most of physics says that future and past are both completely determined by the laws of nature, and so are both equally and completely definite. Most of deterministic physics even goes so far as to have 'time reversal symmetry' — the future and the past are fully equivalent. This really makes it hard to see why I can't remember tomorrow.

Time reversal symmetry is a startling fact. Suppose I have a system in an initial state — for example, a piece of paper that has just been touched by a flame. Over some time interval, the system develops (physicists usually say 'evolves', though this has nothing to do with Darwin) into some other state. In our example, the paper will of course catch fire and burn. Let's stop the clock once the flames have died down, and call the little pile of ash our final state. In fact the final state is not just the pile of ash: it includes emitted light and dust and carbon dioxide molecules, and everything that resulted from the paper burning. For simplicity we can consider the final state to contain light, smoke, and ashes, letting the smoke and light stand for everything else that is also involved. Let's imagine there are no walls around, so the flashes of light from the flames just continue flying outwards.

Burning is easy. Unburning is not.
Now consider a hypothetical 'mirror state' to this final state, in which every particle and light wave is in the same position, but moving in the opposite direction. Of course it would be very difficult to make this mirrored state in reality. It's easy to make a pile of ash surrounded by rising smoke and outgoing light. Just burn some paper. But it's hard to make a pile of ash surrounded by falling smoke and incoming light.

Still, the mirrored final state of our burnt paper is in principle a possible state of reality, as far as we can tell. Suppose we could somehow achieve it. What would happen next? If burning of paper is governed by laws that have time reversal symmetry, then what would happen would be exactly the burning process, in reverse. The light and smoke would fall inwards onto the ashes, and these would re-assemble themselves into paper. Plumes of hot gas would form over the re-assembling ashes during the unburning process, but these time-reversed flames would be only faintly visible, for they would only be emitting as much light as ordinary flames absorb; the copious light that normal flames emit, these time-reversed flames would be absorbing instead. In the end, though, the motion-mirrored ashes would have unburnt into crisp white paper.
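This in-principle reversal is easy to see numerically. In the sketch below (my illustration, not from the post), a particle moving under a nonlinear force is evolved forward with a time-symmetric integrator (velocity Verlet); flipping its velocity produces the "mirror state", and evolving again for the same time retraces the motion back to the start:

```python
import math

def accel(x):
    # a nonlinear "pendulum" force; any force law would work the same way
    return -math.sin(x)

def evolve(x, v, steps, dt=0.001):
    # velocity Verlet: a time-reversible integration scheme
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = evolve(x0, v0, 5000)     # run forward for 5 seconds
x2, v2 = evolve(x1, -v1, 5000)    # mirror the velocity, run again
# (x2, -v2) matches (x0, v0) to within floating-point rounding
print(x2, -v2)
```

The catch, as the rest of the argument explains, is that preparing the mirror state exactly is trivial for one particle and fantastically hard for the bazillions of particles and waves involved in burnt paper.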

Okay, such a bizarre process is in principle possible. Should we be surprised? In a world full of unthinkably huge numbers of particles and waves all jostling together, it may be no wonder that all kinds of things are in principle possible. We have acknowledged, however, that this unburning, though possible in principle, would be difficult to achieve in practice, because it would be very hard to assemble the time-reversed mirror state of the light, smoke, and ashes. In fact it would be extremely hard, because not just any state of incoming light and smoke falling onto ash would do. We would need the precise mirrored version of the final state from burning. So all the smoke particles have to be lined up so they will fall exactly into place on the ashes, and the incoming light has to have exactly the right waveforms to meet the ashes at the right time to drive the chemical reactions to separate oxygen from CO2 and assemble the carbon into cellulose, and so on. The precision of control that would be needed is ridiculously infeasible. Out of all the possible states that include ashes, falling smoke, and incoming light, all of which would be difficult states to create, only a fantastically tiny fraction are actually time-reversing mirror states for burnt paper.

The conclusion seems therefore to be only the reasonable one, that ashes unburning into paper is a stupendously unlikely event. Paper burning into ash, in contrast, is not at all uncommon; it is quite easy to arrange this. So, in spite of time reversal symmetry, there remains a clear difference between past and future. Yesterday's paper becomes tomorrow's ash, and never the other way around, because paper that is about to burn into ash is nothing special, while ash that is about to unburn — that is, the time-reversed mirror state of freshly burnt paper — could only be a fantastically fine-tuned set-up.

But here's the point. For every initial state of paper that is about to burn into ash, there is a precise final state of ash, rising smoke, and outgoing light. For every such state there is a precise mirror state of ash, falling smoke, and incoming light. And that is a state of ash that is about to unburn into paper. So for every possible initial state of paper that is about to burn into ash, there is a state of ash that is about to unburn into paper; and conversely, there are only as many ways to burn paper as there are to unburn it. So how can burning be nothing special while unburning is fantastic? How can it be that paper-about-to-burn states are so much more commonly seen than ash-about-to-unburn states? There are exactly as many of each kind of state, for they match one-to-one. If one kind of state counts as a fantastically fine-tuned set-up, why is the other one any less bizarre?

How can it be that burning is so much more common than unburning? Obviously it is. But why?

This is the puzzle of time reversal symmetry.

Saturday, May 9, 2009

Remember Tomorrow?

I can remember yesterday, so why can't I remember tomorrow? This may sound like a pretty silly question, but it's one that physics can't yet answer.

The apparently obvious answer is that yesterday has already happened, whereas tomorrow hasn't yet. But this isn't really an answer at all, because the real point of the question is just to ask, what is the difference between 'already happened' and 'going to happen'? And insist as one may that this difference is obvious, that does not amount to an explanation of exactly what the difference is.

Drifting on the river of time
Where is yesterday when it is gone? We can imagine that each moment simply vanishes as it passes. So perhaps yesterday does not exist at all, and our present memories of it might just as well be legends of an imaginary world. Or we can imagine that the past is just the country that stretches behind us as our awareness drifts steadily forward on the river of time. So perhaps yesterday is still there, and our memories are souvenir sketches of a real landscape to which we will never happen to return.

We can think of the future in the same two ways. Perhaps it doesn't exist at all until it becomes the present. Or perhaps it is all there waiting for us to arrive.

There seems to be some truth in both pictures of the future. Some future events occur as anticipated, while others appear as sudden surprises. Yet we can find the same mixture of definiteness and indefiniteness in the past, as well. Important experiences remain vivid in memory, but obscure details fall into oblivion as soon as they pass. So the distinction between future and past is not as simple as one being real where the other is only potential. The difference between how real the two are seems to be only a difference of degree, not of kind.

The difference of degree is there, however. Memory may be fallible, but it is usually more accurate than prophecy. We have to say that the landscape behind us on the river of time seems more detailed than the terrain still ahead.

Why can't we remember tomorrow? The answer really isn't obvious after all. It has something to do with causality. Perhaps it also has something to do with how some things are more important than others. And that's really the bottom line of what physicists can say, right now, about this startlingly basic question.

Starting Out


There are several other 'cold heat' sites on the Web, and they are fine sites dedicated to various things, but I think this is the only one that is literally about cold heat. It's about heating and heat flow in gases cooled to within a millionth of a degree above absolute zero temperature. (Even though they are so cold, the gases don't become solid, because they are kept at extremely low pressure.)

This is an active research field in current physics, and has been for about fifteen years. In 2001 the Nobel prize was awarded for a big experimental breakthrough on this stuff. It's a pretty technical topic, but I'm going to try to make it understandable.

Why read this? Well, the technical details of quantum gases are pretty heavy going, but the basic questions we're investigating are really profound. They go right to the heart of what matter is, what physics is, and how things really are.