The second law of thermodynamics is one of the cornerstones of physics. Indeed, even among the most well-tested fundamental scientific principles, it enjoys a somewhat special status, prompting Arthur Eddington to famously write in his 1928 book The Nature of the Physical World:
The Law that entropy always increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations—then so much the worse for Maxwell's equations. If it is found to be contradicted by observation—well these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

But what, exactly, is the second law? And what about it justifies Eddington's belief that it holds 'the supreme position among the laws of nature'?
In order to answer these questions, we need to re-examine the concept of entropy. Unfortunately, one often encounters, at least in the popular literature, quite muddled accounts of this elementary (and, actually, quite simple) notion. Sometimes, entropy is equated with disorder; other times, a more technical route is taken, and entropy is described as a measure of a thermodynamic system's ability to do useful work. It is wholly unclear, at least at first, how the one is supposed to relate to the other.
I have tackled this issue in some detail in a previous post; nevertheless, it is an important enough concept to briefly go over again.
To me, it's most useful to think of entropy as a measure of how many microstates there are to a given macrostate of some thermodynamic system. Picture a room full of gas, like the one you're probably in right now: what you observe is primarily that the gas has a certain volume, temperature, and pressure. These characterize the macrostate. However, at a level unobservable to you, the gas consists of a huge number of molecules (roughly 10²⁵ per cubic meter). The positions of all of these molecules, their speeds, and the directions of their motion make up the microstate of the gas.
It's plain to see that I could change many details of this microstate without causing any observable change in the macrostate -- I could exchange an O2 molecule in the upper right corner with one in the lower left, or, more generally, a molecule here with a molecule there in a myriad of ways, and nobody would notice. So there are a great number of microstates to the macrostate of the gas in your room that you observe; hence, the gas' entropy is very high (maximal, in fact).
However, if I were to move all the molecules of air in your room to one half of it, leaving the other utterly empty, you most certainly would notice a difference -- especially if you happen to be in the now empty half of the room!
But this situation (luckily) would not persist -- if I 'stopped the clock', laboriously carried every molecule towards the back half of the room, then restarted the clock again, the gas would almost immediately expand to fill the whole room again. The reason is that the gas, bunched up in one half of the room, has a lower entropy than if it fills the whole room. It's fairly intuitive: there are now fewer changes I could make to the configuration of the molecules that would go unnoticed -- there are fewer places for the molecules to be, for starters. The number of configurations of gas molecules that correspond to the gas being bunched up in one half of the room is much lower than the number of configurations that correspond to the gas filling the entire room -- there are fewer microstates to the former macrostate than there are to the latter.
In fact, there are enormously fewer bunched-up states available to the gas than room-filling ones. Thus, if I were to draw a random state for the gas from a hat, I would, with much higher likelihood, draw one that fills the whole room than one that only fills half of it. This entails, however, that any change to the state of the gas will likely lead towards states of higher entropy -- since there simply are more of those. Thus, the gas expands.
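To put rough numbers on this, here is a small Python sketch (my own toy model, not part of the original argument): each of N molecules is treated as sitting, independently, in either the left or the right half of the room, and we simply count how many microstates belong to each macrostate.

```python
from math import comb, log

# Toy model: each of N molecules is independently in the left or the right
# half of the room.  A "macrostate" is just the number of molecules on the
# left; a "microstate" is the full left/right assignment of every molecule.
N = 100  # a very small gas -- a real room holds roughly 10^25 molecules

total_microstates = 2 ** N

# Microstates for the macrostate "all molecules bunched up in one half"...
bunched_up = comb(N, N)            # = 1
# ...versus the macrostate "evenly spread" (N/2 molecules on the left).
evenly_spread = comb(N, N // 2)

print(f"bunched-up microstates:    {bunched_up}")
print(f"evenly-spread microstates: {evenly_spread:.3e}")
print(f"chance of drawing a bunched-up state: {bunched_up / total_microstates:.3e}")

# Boltzmann's formula S = k ln(W), here in units of k, for both macrostates.
print(f"entropy (units of k), bunched up:     {log(bunched_up):.2f}")
print(f"entropy (units of k), evenly spread:  {log(evenly_spread):.2f}")
```

Even for a 'gas' of only a hundred molecules, the bunched-up macrostate makes up a vanishingly small fraction of all microstates; for 10²⁵ molecules, the numbers become unimaginably lopsided.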
This connects immediately to the notion of entropy as measuring a system's ability to do work -- if I were to insert a piston into the evacuated half of the room, the gas' expansion would drive the piston, which might, for example, help pull a load, or do any other kind of work. If, however, the gas fills the whole room and I were to insert a piston, it would be pushed on from both sides equally, thus not doing any work.
It's important to notice that, ultimately, the second law of thermodynamics is thus a simple law of probability -- states that correspond to more configurations of the system, i.e. more probable states, occur more often; that's all there is to it. It seems impossible to conceive of any way to violate this law -- Eddington's confidence was thus well placed.
Maxwell's Demon
Despite the second law's firm foundation, however, for more than 100 years a simple thought experiment stood against it, seemingly irrefutable. This experiment was conceived by James Clerk Maxwell, best known as the originator of the mathematical theory unifying electricity and magnetism, and it came to be known as Maxwell's Demon. Maxwell imagined an intelligent being that, unlike you, is capable of directly observing the microstate of the molecules in your room, in particular their positions and velocities (we are, for the moment, imagining the molecules as classical entities; thus, the demon is not limited in his observations).
Now picture a wall separating the two halves of the room, and, in the middle of the wall, a very tiny trapdoor that the demon can open and close at will; since it turns on well-oiled (i.e. frictionless) hinges, no work is required to open or close it. Whenever the demon sees a fast molecule arriving from the left side, he opens the door and lets it through; when he sees a slow molecule approaching from the right side, he does the same. Gradually, the right side will heat up, and the left side correspondingly cool down. This heat differential, however, can be used to do work -- the demon has managed to find a way to get a system whose entropy was originally maximal to perform useful work, flying in the face of the second law!
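To make the sorting a little more vivid, here is a small toy simulation (purely my own illustration; the exponential speed distribution and the 'fast/slow' threshold are arbitrary choices): the demon lets fast molecules through from left to right and slow ones from right to left, and the average kinetic energies of the two halves drift apart.

```python
import random

# A cartoon of Maxwell's demon: molecules are just speeds tagged with a
# side ('L' or 'R').  The demon admits fast molecules from left to right
# and slow molecules from right to left through the trapdoor.
random.seed(1)

THRESHOLD = 1.0  # the demon's dividing line between "fast" and "slow"
molecules = [[random.choice("LR"), random.expovariate(1.0)]
             for _ in range(10_000)]

def mean_energy(side):
    """Average squared speed on one side -- a stand-in for temperature."""
    speeds = [v for s, v in molecules if s == side]
    return sum(v * v for v in speeds) / len(speeds)

print("before sorting:", mean_energy("L"), mean_energy("R"))

# One sweep of the demon's sorting rule over every molecule.
for mol in molecules:
    side, v = mol
    if side == "L" and v > THRESHOLD:
        mol[0] = "R"   # fast molecule let through to the right half
    elif side == "R" and v <= THRESHOLD:
        mol[0] = "L"   # slow molecule let through to the left half

print("after sorting: ", mean_energy("L"), mean_energy("R"))
```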
But is that really what happened? Almost everybody, upon first being told of this thought experiment, suspects something fishy is going on; nevertheless, exorcising the demon has proven surprisingly hard.
In order to get a better handle on the problem, let us look at a variation devised by the Hungarian physicist Leó Szilárd, known as Szilárd's Engine. He considered a greatly simplified version of Maxwell's original thought experiment: in it, a single molecule moves through the room in only one dimension. The demon measures which half of the room the molecule is in. If it is, say, in the left half, he slides in (frictionlessly, so as not to require any expenditure of work) a wall dividing the room; then, he slides (again frictionlessly) a piston into the right half, and removes the wall again. The molecule will now bump against the piston, each time moving it a little; this motion can again be used to do work. The 'room', i.e. the engine, is in contact with a heat bath, some environment at a certain temperature; thus, the molecule picks the energy it transfers to the piston back up from the heat bath, leading to a cooling of the environment, and hence, a reduction of entropy. Szilárd was able to calculate the precise amount of work as equal to kTln(2), where k is Boltzmann's constant, T is the (absolute) temperature, and ln(2) is the natural logarithm of 2.
(It's a simple calculation -- the work a gas does through expansion is equal to the pressure times the change in volume; since the pressure changes with the volume, one has to perform an integration, i.e. sum up very small changes in volume multiplied by their corresponding pressure. Thus the work W = ∫p dV, where the integral runs from V/2 to V, V being the room's volume. By the ideal gas law, pV = kT for a 'gas' consisting of a single molecule, thus p = kT/V, and W = ∫(kT/V) dV = kT(ln(V) - ln(V/2)) = kTln(V/(V/2)) = kTln(2), where I've used a nice property of the logarithm, ln(a) - ln(b) = ln(a/b).)
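As a quick sanity check, one can evaluate this number and verify the integral numerically; the sketch below just uses the standard value of Boltzmann's constant and assumes room temperature, T = 300 K.

```python
from math import log

k = 1.380649e-23   # Boltzmann's constant in J/K
T = 300.0          # assumed room temperature in kelvin
V = 1.0            # the room's volume; the result does not depend on it

# Closed form: W = kT ln(2)
W_exact = k * T * log(2)

# Numerical check of W = ∫ p dV from V/2 to V, with p = kT/V for a one-molecule gas
steps = 100_000
dV = (V - V / 2) / steps
W_numeric = sum(k * T / (V / 2 + (i + 0.5) * dV) * dV for i in range(steps))

print(f"kT ln(2)        = {W_exact:.4e} J")    # about 2.87e-21 J at 300 K
print(f"numerical ∫p dV = {W_numeric:.4e} J")  # agrees to several digits
```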
Physical Information
The interesting thing about this result is that it directly connects information with the physical notion of work, and thus, of energy -- the demon obtains one bit of information about the system, and, using nothing else, is able to extract energy from it. Indeed, it is not hard to see where the value for the extracted energy comes from: kln(2) is essentially just a conversion factor between entropy measured in bits, i.e. information-theoretic Shannon entropy, and thermodynamic entropy; multiplying this by the temperature T gives the amount of work one would need to perform on the system in order to reduce the entropy by kln(2) (or equivalently, one bit), or conversely, the amount of work the system can perform if its entropy is reduced by that amount.
Still, the disturbing conclusion persists: the demon can extract work from a system in thermodynamic equilibrium, i.e. at maximum entropy. But the realization that information is a physical entity is the key we need to save the second law.
The equivalence between '1 bit of information' and 'kTln(2) joules of energy' does not hold just in the special case of Szilárd's Engine; rather, as Rolf Landauer, working at IBM, first noted, it applies generally. To see this, consider how information is stored in a physical system. For each bit, there must exist a pair of distinguishable states of the system -- remember, information is 'any difference that makes a difference'. Now imagine deleting information: to achieve this, the number of states of the system must be reduced. But if one reduces the number of (micro)states of the system, this would entail a forbidden entropy decrease -- thus, the entropy elsewhere, i.e. either in an environment acting as an 'entropy dump' or in a part of the system not making up the states used to represent information, must increase by (at least) a compensating amount.
For a more concrete picture, consider a system with a 'memory tape' consisting of a set of boxes, each of which is like the room in Szilárd's engine, containing a '1-molecule gas'. If the molecule is in the left half of a box, this corresponds to a logical 0; conversely, if it is found in the right half, its logical state is interpreted as 1. Re-setting the tape, i.e. transferring it to the logical state 'all 0', then corresponds to halving each box's volume, and thus halving the number of microstates available to each box (every gas molecule can now only be in the left half of its box, versus being anywhere in the box). To do so, the gas in each box has to be compressed to half its volume -- which is the inverse of the expansion process analysed in the previous section, and thus necessitates an amount of work equal to kTln(2) to be done on each box.
Information deletion thus always incurs the price of an entropy increase, and consequently, the production of waste heat. One way to view this is that a 'deleted' bit is expelled into the environment in the form of kTln(2) joules of heat.
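To get a feeling for the magnitudes involved, here is a short back-of-the-envelope sketch (my own numbers, purely for illustration, again at an assumed 300 K) of the minimum heat dissipated by erasing a given amount of memory:

```python
from math import log

k = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0          # assumed room temperature, K

def landauer_heat(bits, temperature=T):
    """Minimum heat (in joules) dissipated when erasing `bits` bits."""
    return bits * k * temperature * log(2)

print(f"erasing 1 bit: {landauer_heat(1):.3e} J")    # ~2.9e-21 J
print(f"erasing 1 GB:  {landauer_heat(8e9):.3e} J")  # ~2.3e-11 J
```

The Landauer bound is thus far below the heat actually dissipated by present-day hardware, which is why it matters mostly in principle -- and for thought experiments like this one.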
It was Charles Bennett, a colleague of Landauer at IBM, who noticed that, using these ingredients, the puzzle of Maxwell's demon could finally be solved. Key to this resolution is the realization that the demon itself must be a physical information-processing system (of course, one could posit it to be some supernatural being, but this spoils the debate, as nothing sensible can be said by physics about the supernatural, by definition). In particular, he must thus have a finite number of internal states he can use to represent information -- in other words, a finite memory. So, at some point after he has started taking data and extracting work, he will have to start making room for new data -- that is, delete information. But this will create an entropy increase, and thus waste heat, of precisely kTln(2) joules per deleted bit -- just the same amount he has previously extracted from the system for free! Thus, all the work he got out of the system, he eventually will have to pay for in entropy increase. The second law is, finally, safe!
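Bennett's bookkeeping can be caricatured in a few lines of code (a sketch of my own, with a hypothetical memory capacity MEMORY_BITS): every measurement fills a memory slot and yields kTln(2) of work, but once the memory is full, each further measurement first requires an erasure costing at least the same amount, so the net gain is capped.

```python
from math import log

k, T = 1.380649e-23, 300.0
W_BIT = k * T * log(2)   # work gained per measured bit = minimum cost per erased bit

MEMORY_BITS = 1_000      # hypothetical size of the demon's finite memory
used = 0
work_extracted = 0.0
work_paid_for_erasure = 0.0

for cycle in range(10_000):
    if used == MEMORY_BITS:
        # Memory full: a bit must be erased before the next measurement,
        # dissipating at least kT ln(2) into the environment (Landauer).
        work_paid_for_erasure += W_BIT
        used -= 1
    # Measure which half the molecule is in and extract kT ln(2) via the piston.
    used += 1
    work_extracted += W_BIT

print(f"extracted: {work_extracted:.3e} J, paid for erasure: {work_paid_for_erasure:.3e} J")
print(f"net gain:  {work_extracted - work_paid_for_erasure:.3e} J")
print(f"cap (MEMORY_BITS * kT ln 2): {MEMORY_BITS * W_BIT:.3e} J")
```

However many cycles the demon runs, the net work extracted in this toy model never exceeds MEMORY_BITS × kTln(2) -- the one-off gain of filling an initially blank memory.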
Hypercomputation
Or is it? *cue dramatic music*
Bennett's analysis essentially assumes Maxwell's demon to be a finite-state automaton, a notion of computing machine somewhat weaker than a Turing machine in that it only has a bounded amount of memory available. This certainly seems reasonable, but it is not in principle impossible that there are physical systems exceeding the computational capacity of such a machine. Assuming that there aren't amounts to assuming that the (physical) Church-Turing thesis holds, which roughly says that the notion of computability embodied by Turing machines (or their equivalents) exhausts physical computability -- i.e. that there is no function computable by some physical means that can't be computed by a Turing machine, or, put differently, that there are no means of computation more powerful than Turing machines. Such a means, implemented as a concrete device, is called a hypercomputer.
This is closely tied to one of the central theses I wish to explore on this blog: that the universe itself is computable in Turing's sense, i.e. that in principle a Turing machine exists that can compute ('simulate') the entire evolution of the universe. Certainly, if this is true, then there can be no physical hypercomputers.
The current state of affairs is that physical theories seem to imply that the universe isn't computable; I have previously argued against this view, and will now try to use what we have learned in this post to mount another attack.
The main culprit standing against the computability of the universe is the (apparently) continuous nature of spacetime. This continuity implies that there are as many points in an interval of spacetime as there are real numbers; however, there are only as many Turing machines as there are natural numbers, so most real numbers can't be computed -- the continuum is not a computable entity. This can be exploited in order to achieve computational power greater than that of any Turing machine; two concrete proposals along this line are Blum-Shub-Smale machines and Hava Siegelmann's analog recurrent neural networks, or ARNNs.
Now let's suppose that things actually are that way -- the continuum is real, and the Church-Turing thesis false. If we give Maxwell's demon access to the computational power inherent in the continuum, does this have any bearing on Bennett's conclusion?
It is easy to see that this is indeed the case. Imagine that, for his memory, we gave the demon a certain continuous interval to work with. He could now use the following procedure to encode his observations of the molecule: if the molecule is found in the left half, discard the left half of the interval; if it is in the right half, correspondingly discard the right half of the interval (this 'discarding' is not meant in the sense of deleting something -- rather, one might imagine two dials, such that if the left half is discarded, the left dial is moved half the remaining distance towards its maximum value, and analogously for the right one). Since the continuous interval is infinitely divisible, this process can go on forever. Knowledge of the original interval and the remaining part encodes the precise series of measurement outcomes.
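In code, the demon's trick looks roughly like the sketch below (my own illustration, using exact rational arithmetic to stand in for the physical continuum; ordinary floating-point numbers would run out of precision after about 53 bits, which is precisely the point -- only a genuinely continuous memory avoids this):

```python
from fractions import Fraction

def record(outcomes, lo=Fraction(0), hi=Fraction(1)):
    """Shrink the interval [lo, hi) by half for every measurement outcome.

    Seeing the molecule on the left ('L') discards the left half of the
    interval; seeing it on the right ('R') discards the right half.  The
    final interval encodes the entire measurement history.
    """
    for outcome in outcomes:
        mid = (lo + hi) / 2
        if outcome == "L":
            lo = mid        # discard the left half of the interval
        else:
            hi = mid        # discard the right half of the interval
    return lo, hi

def read_back(lo, hi, n_bits):
    """Recover the sequence of outcomes from the final interval."""
    outcomes = []
    cur_lo, cur_hi = Fraction(0), Fraction(1)
    for _ in range(n_bits):
        mid = (cur_lo + cur_hi) / 2
        if lo >= mid:
            outcomes.append("L")
            cur_lo = mid
        else:
            outcomes.append("R")
            cur_hi = mid
    return "".join(outcomes)

history = "LRRLLRLRRRLL"
final_lo, final_hi = record(history)
print(final_lo, final_hi)
print(read_back(final_lo, final_hi, len(history)) == history)  # True: nothing erased
```

Nothing is ever overwritten: each new observation only refines the interval, so no erasure -- and hence no Landauer cost -- ever occurs.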
The demon thus never has to delete any information, and consequently, never incurs the entropy penalty for deletion. He can produce useful energy forever, creating true perpetual motion. Thus, in a non-computable universe, violation of the second law seems possible!
However, the strength of the argument on which the foundations of the second law rest -- remember, stripped to its essentials, it is just the observation that more likely states occur more often -- leads me to conclude that the above argument in fact should not count in favor of the possibility of perpetual motion, but rather against the possibility of a non-computable universe. If we remember our Eddington:
If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

This conclusion, I must admit, is not quite rigorous -- it might be the case that the universe is non-computable without admitting physical hypercomputation. This is, however, a strange kind of position: it forces us to affirm the existence of a phenomenon for which no physical evidence, i.e. no observably hypercomputational processes, can be found -- thus in principle leaving open the possibility of formulating a description of the world in which the phenomenon is entirely absent, without being any less in agreement with observational data. As Ockham would tell us, in that case, the latter hypothesis is the one we should adopt -- meaning that the reasonable stance would be to believe in the computability of the universe, too.