In the last post, we familiarized ourselves with some basic notions of algorithmic information theory. Most notably, we saw how randomness emerges when formal systems or computers are pushed to the edges of incompleteness and uncomputability.
In this post, we'll take a look at what happens if we apply these results to the idea that, like computers or formal systems, the physical world is just another example of a universal system -- i.e. a system in which universal computation can be implemented (at least in the limit).
First, recall the idea that information enters the description of the physical world through viewing it as a question-answering process: any physical object can be uniquely identified by the properties it has (and those it doesn't have); any two physical objects that have all the same, and only the same, properties are indistinguishable, and thus identified. We can thus imagine any object as being described by the string of bits giving the answers to the set of questions 'Does the object have property x?' for all properties x; note that absent an enumeration of all possible properties an object may have, this is a rather ill-defined set, but it'll serve as a conceptual guide.
In particular, this means that we can view any 'large' object as being composed of a certain number of 'microscopic', elementary objects, which are those systems that are completely described by the presence or absence of one single property, that may be in either of two states -- having or not having that particular property. Such a system might, for instance, be a ball that may be either red or green, or, perhaps more to the point, either red or not-red. These are the systems that can be used to represent exactly one bit of information, say red = 1, not-red = 0. Call such a system a two-level system or, for short, and putting up with a little ontological inaccuracy, simply a bit.
Doing a 'measurement' on a two-level system then amounts to finding out which state it is in; that is, asking of it the question: 'Are you in state r?', where r here may be taken for red, for instance. The outcome of the measurement then represents the answer to this question: 1 for yes, 0 for no. The measurement on a compound object may then be taken as the composition of measurements 'asking' about all the properties the object might have; it will thus yield a bit string uniquely identifying the object.
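As a toy illustration, this picture can be put in code; the property list below is purely hypothetical, chosen only to make the bit-string idea concrete:

```python
# Toy sketch: an object is identified with the bit string of answers to
# a fixed list of yes/no property questions (list purely illustrative).
PROPERTIES = ["red", "round", "heavy"]

def measure(obj):
    """Ask 'does the object have property x?' for each x; record 1 or 0."""
    return "".join("1" if prop in obj else "0" for prop in PROPERTIES)

ball = {"red", "round"}   # an object, given by the properties it has
print(measure(ball))      # -> '110': the answers uniquely identify the ball
```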
Now, this is, in a sense, an overly literal presentation: the gist really is just that any object has a unique description, and that description contains a finite amount of information; I needn't have talked about bits or two-level systems at all, but for concreteness, it is convenient to frame the discussion in these kinds of terms.
In any case, we now see that making a measurement on some object or physical system amounts to asking questions of this system. But what if it doesn't know the answer? That is, if we take the information contained in a system to correspond to something akin to the information contained in some set of axioms, what if the answer to some question asked of the system through measurement doesn't follow from that information, i.e. is formally independent of it? Putting it yet differently, what if the system can't compute an answer?
Thanks to the discussion in the previous post, we know that what we should expect in such a case is the emergence of randomness -- the emergence, thus, of facts not reducible to the information contained in a system. (By the way, I largely use the terms 'object' and 'system' interchangeably -- mostly, whenever I think of something as just passively sitting there, being subject to some kind of procedure, it's an object to me, while if I think of it as actively doing things, such as computing an answer to a measurement, I think of it as a system -- but these two views really just constitute a change of narrative viewpoint, nothing of greater substance.) So, from the simple argument that, if physical reality comprises a universal system, the same limitations and boundaries should hold for it as hold for other universal systems, like computers and formal axiomatic systems, we conclude that we ought to expect to find randomness in the physical world.
This is a conclusion very different from the one that is usually drawn when considering the possibility that the world might be computable -- in general, it is thought that the appearance of randomness throws a spanner in the works, since after all, randomness isn't computable (that's what makes it random)!
The key here is to realize that the randomness does not occur 'within' the computation, but rather at its edge -- and in this sense, it is possible to at least obtain finite amounts of randomness, i.e. approximate some random number, like, for instance, a Chaitin Ω-number, to a finite precision, as Calude et al. do in this paper (NB: that an Ω-number can only be computed to finite precision is of course equivalent to Chaitin's incompleteness theorem, as we saw in the last post). The conclusion that randomness implies the impossibility of computational physics is thus a rather premature one!
However, it is difficult to see what this appearance of randomness 'at the edges' might mean for physics. In order to see it in action, we first must set the stage, and for this, I'll need the concept of phase space.
Phase Space Dynamics
The classic Newtonian model system is a point mass moving through space. At any given moment in time, it has a certain location, described by three numbers, its coordinates. These basically represent how far away it is from the floor, from the left wall, and from the back wall of your room; these three references make every point in space uniquely identifiable.
However, this description alone tells us nothing about how the point mass is going to behave, i.e. where it will be at some later point in time. In order to know that, you need more data; specifically, you need to know how the point mass changes its position with time -- that is, you need to know its velocity. Since there are three numbers determining its place, there also need to be three numbers determining its velocity -- one each corresponding to the change of position parallel to the floor, left wall, and back wall. It is usually more convenient to refer to the particle's momentum, which, in the simplest case, is its velocity times its mass (where here 'particle' just means 'Newtonian point mass', no relation to more sophisticated things known as 'elementary particles').
Thus, three numbers relating to the particle's position, and three numbers relating to its momentum, are enough to characterize all possible states of motion of the particle. These are the axes that span the particle's phase space. A state of the particle is a point in this phase space; the particle's evolution, being its change of state over time, then traces out a line in this phase space.
For simplicity, we can imagine the particle being constrained to moving in just one direction; since the other directions are fixed, we can safely forget about them, and the particle's state is determined simply by one number for its position, and one for its momentum along the relevant direction -- the six-dimensional phase space collapses to a two-dimensional phase plane. This has the great virtue of being easy to draw on a two-dimensional monitor:
Fig. 1: A particle in phase space
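(For readers who'd like to draw such a picture themselves, here is a minimal sketch, assuming numpy and matplotlib are available; it plots the phase-plane trajectory of a simple harmonic oscillator, one concrete example of a 'line traced out in phase space'.)

```python
# Minimal sketch of a phase-plane picture: the trajectory of a simple
# harmonic oscillator, x(t) = A cos(wt), p(t) = -mAw sin(wt), which
# traces out an ellipse in the (x, p) plane. Parameters illustrative.
import numpy as np
import matplotlib.pyplot as plt

m, omega, E = 1.0, 1.0, 1.0                   # mass, frequency, energy
t = np.linspace(0, 2 * np.pi / omega, 200)    # one full period
A = np.sqrt(2 * E / m) / omega                # position amplitude
x = A * np.cos(omega * t)
p = -m * A * omega * np.sin(omega * t)        # momentum p = m dx/dt

plt.plot(x, p)
plt.xlabel("position x")
plt.ylabel("momentum p")
plt.title("A particle's trajectory in the phase plane")
plt.show()
```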
Now, how do we find out the particle's state -- that is, how do we locate it in phase space? One way would be to ask questions like: 'Are you at position x0? Do you have momentum p0? Are you at x1?...'
It is clear that this would be a very inefficient way of measuring the particle's properties: you stand to ask many questions before you ever get an answer that gives you any information (worse, even: since there are in principle infinitely many positions for the particle to be in, and infinitely many momenta for it to have, you ought to expect to have to ask infinitely many questions -- and really, who's got that kind of time these days). It would be like playing 20 questions by going through a list of all possible people: 'Are you Aaron A. Aaronson? Are you Aaron B. Aaronson?' and so on. Not a winning strategy!
It's better to phrase your questions in such a way that each of them gives you valuable information -- each of them needs to constrain the possibilities such that the possible follow-up questions to each question are limited by the answers you receive. The simplest trick is not to ask whether the particle is at a given position, but whether it is within a given range of positions (or momenta), an interval (for this, we need to assume the range of possible positions is bounded somehow, which translates to the assumption that in the experiment, the system you're experimenting on is at least present; we'll just call the maximum value 1000, for convenience). Your starting question might be: 'Are you between 0 and 500?'
Now, in each case, whatever the answer, you will learn something: either that the particle is between 0 and 500; or, that it is between 500 and 1000! Depending on this answer, you can narrow down the scope of your following question: for yes, you ask: 'Are you between 0 and 250?', and for no, you ask: 'Are you between 500 and 750?' This is called a nesting of intervals.
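In code, this interval-nesting game is just binary search. Here is a minimal sketch, where the bound of 1000 is the one assumed above and the `answer` callable is a stand-in for the measured system:

```python
# Interval nesting as binary search: each yes/no question halves the
# interval the particle can be in.
def locate(answer, lo=0.0, hi=1000.0, questions=20):
    """Narrow down a position by repeatedly asking 'are you below x?'.

    `answer` stands in for the system under measurement: a callable
    returning True/False for the question 'is your position below x?'.
    """
    for _ in range(questions):
        mid = (lo + hi) / 2
        if answer(mid):    # 'yes' -> the particle is in the lower half
            hi = mid
        else:              # 'no'  -> it is in the upper half
            lo = mid
    return lo, hi          # the interval halves with every question

# Example: a particle at x = 137.036; twenty questions confine it to a
# window of width 1000 / 2**20, i.e. about a thousandth of a unit.
position = 137.036
print(locate(lambda x: position < x))
```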
Now, if there is some smallest interval, then this process will eventually terminate, giving you the interval (of either momentum or position) the particle is in. But why would there be one? In general, the assumption is that space is continuous, and that momentum is likewise; so there exists an exact real number for each of the particle's coordinates in phase space. But real numbers have the property that between every two of them, there are infinitely many more -- which means that you can refine your interval indefinitely, and nevertheless never exactly pinpoint the particle's location! In other words, it would still take asking and answering an infinite number of questions to know the particle's properties precisely.
Of course, one could be content with just knowing them to finite accuracy -- after all, every real-world measurement has some finite error; thus, all one ever gets is really a range of positions, or respectively of momenta, for the particle to be in. But still, there is something profoundly strange about the notion that it should take an infinite amount of information to locate a particle in phase space.
But what if the information needed turns out to be finite? What if, after a certain point, the system is just not able to answer any more questions? We have, after all, reason to suspect it ought to be so: if the physical world is a universal system, it is limited by incompleteness. Some things can't be known to arbitrary precision, and at the edge of these unknowables, we find the notion of randomness.
So let's now suppose that the system in question were described by a random number, so that the bit string obtained by measuring it is random in the sense of the previous post (i.e. it is incompressible and contains a maximum amount of information). It is then a direct consequence of Chaitin's incompleteness theorem that this bit string can only ever be known to finite accuracy through computable means; this is so because, as we have seen, any random (and left-c.e.) number is an Ω-number, and Ω-numbers can only be approximated to a finite degree of accuracy by computation. This poses a fundamental limit on the information that can be contained in a system; in particular, returning to the example at hand, it means that there is a fundamental limit to the localizability of a particle in phase space. This is not to be thought of as a restriction applying only to measurement -- as if there existed a definite phase-space location of the particle that we are merely forever doomed to be ignorant of; rather, the definite location does not exist: nature herself, if she is a universal system as stipulated, doesn't know the location with greater accuracy. The questions regarding smaller intervals are not answerable by the system; they are undecidable, uncomputable, an intrinsic expression of the system's incompleteness.
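Stated schematically (a standard form of Chaitin's theorem; here H(F) denotes the program-size complexity of the formal system F, and c is a constant):

```latex
% A formal system F can determine at most finitely many bits of Omega:
\[
  \#\{\, n : F \text{ proves the value of the } n\text{-th bit of } \Omega \,\}
  \;\le\; H(F) + c .
\]
```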
This means concretely that neither x nor p can be known with perfect accuracy; i.e. that the particle can only be confined to a minimum area within phase space, but not further than that. Putting it into pictures:
Fig. 2: Minimum area of phase space localizability
If this sounds familiar to some, there's a good reason: the area of the phase plane has units of [momentum · position] = [energy · time] = [Joule · second], which are, not coincidentally, the units of Planck's constant h. The simple assumption of a fundamental limit to the amount of information that is extractable from a system, which is in turn a direct consequence, via the Chaitin/Gödel limitative theorems, of the assumption that the physical world is a universal system, introduces the core quantity of quantum theory into our description.
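As a quick plausibility check (a standard semiclassical estimate, not something derived in this post), one can count how many distinguishable states fit into a given region of phase space:

```latex
% Number of distinguishable states in a phase-space region of area A,
% given a minimum resolvable cell of area h:
\[
  N \;\approx\; \frac{A}{h} .
\]
```

This is precisely how states are counted in semiclassical statistical mechanics.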
But we can do much better than that.
Phase Space Quantum Mechanics
First, observe that the postulate that the minimum area within which a system can be localized in phase space is equal to h introduces the constraint that the product of the uncertainties in position and momentum is at least h, or, expressed in an equation:
ΔxΔp ≥ h
This is (up to the conventional numerical factors -- the textbook version reads ΔxΔp ≥ ħ/2) Heisenberg's famous uncertainty principle, a key relation of quantum mechanics.
Now, imagine a system composed of multiple 'elementary' systems, for each of which this relation holds. It is natural to expect this system's phase space area to be a multiple of h; this gives the condition:
∮ p dx = nh,
which is the quantization condition of what's nowadays known as the old quantum theory (the integral being taken over one period of the motion); it's used to select the quantum-mechanically possible states of the system from the classically allowed ones -- i.e. only if the system obeys the condition above is it in a quantum-mechanically allowed state. This is a historical predecessor of the full quantum theory, and many important systems can already be treated rather well with it, such as the hydrogen atom, or classic textbook problems like the potential well and the harmonic oscillator, where it yields the important realization that energy is quantized in units of hν (ν being a frequency -- that of light, for instance). It also prompted de Broglie to propose the duality between particles and waves, another famous and deep aspect of quantum mechanics.
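As a worked example -- the standard textbook computation, included here only for concreteness -- consider a harmonic oscillator of mass m and frequency ν:

```latex
% Old quantum theory treatment of the harmonic oscillator. The orbit of
% energy E in the phase plane is the ellipse
%   p^2/(2m) + (1/2) m (2 pi nu)^2 x^2 = E,
% with semi-axes sqrt(2mE) (in p) and sqrt(2E/m)/(2 pi nu) (in x), so
\[
  \oint p\,\mathrm{d}x
    \;=\; \pi \sqrt{2mE}\,\frac{\sqrt{2E/m}}{2\pi\nu}
    \;=\; \frac{E}{\nu}
    \;\overset{!}{=}\; nh
  \quad\Longrightarrow\quad
  E \;=\; n h \nu .
\]
```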
But, one can go further even than that and recover the full formalism and phenomenology of quantum theory, using as input only the realization that there exists a minimum quantum of phase space area. To understand this, we must take a look at a mathematical technique called deformation.
Deforming Phase Space
Roughly speaking, a deformation is about what you'd expect it to be -- you take something and push, pull or press it along some axis such that in the end, you have an object clearly related to, but still different from, the original one, which, if the deformation is undone, returns to the form you started out with. Typically, in mathematics, these deformations depend on some parameter, such that if the parameter is taken to an appropriate limit (generally, zero or infinity), you get the original structure back. An example is the deformation of a circle, which yields an ellipse, and depends on a parameter known as eccentricity: the lower the eccentricity, the more 'circle-like' the ellipse becomes, such that an ellipse with eccentricity 0 is nothing but a circle.
In physics, too, deformations are not unknown: for instance, Einstein's special relativity can be seen as a deformation of Newtonian mechanics, where the deformation parameter is the speed of light c (or more exactly, the quotient of a system's speed v and c). When you formally let the speed of light tend to infinity, or only look at systems with speeds v << c, you get the Newtonian theory back.
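To make this concrete with a textbook formula, the relativistic momentum is a deformation of the Newtonian one, with deformation parameter v/c:

```latex
% Relativistic momentum and its Newtonian limit as v/c -> 0:
\[
  p \;=\; \frac{m v}{\sqrt{1 - v^2/c^2}}
    \;=\; m v \left( 1 + \tfrac{1}{2}\frac{v^2}{c^2} + \dots \right)
    \;\xrightarrow{\;v/c \,\to\, 0\;}\; m v .
\]
```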
Now, Newtonian physics has a very natural and quite beautiful formulation in phase space, known as Hamiltonian mechanics. Given the preceding considerations, an interesting question is: does there exist a deformation of Hamiltonian mechanics, whose deformation parameter is Planck's constant h?
It turns out that the answer to this question is yes -- and the deformation turns out to be quantum mechanics! More concretely, there exists an essentially unique (that is, unique up to isomorphism) mathematical structure depending on the parameter h (or, again more accurately, on h/S, where S is the system's action), and that structure is phase space quantum mechanics: a version of quantum mechanics that is equivalent to the more usual Hilbert space version, but has the virtues of introducing less abstract formalism and of working in the same 'arena' as classical mechanics. This, for instance, makes the transition between classical and quantum mechanics explicit: it just amounts to undoing the deformation, i.e. taking the h → 0 (or S → ∞) limit -- either letting the minimum phase space area tend to zero, or looking at things on a large enough scale that it becomes negligible.
It's beyond the scope of this blog to provide a full introduction to phase space quantum mechanics, which, in my opinion, is a quite beautiful and remarkable formalism that is less well known than it perhaps ought to be; we'll just content ourselves with the following two remarks:
- There exists a mapping, called the Wigner-Weyl transformation, which associates each quantum mechanical operator in Hilbert space with an ordinary function on phase space; these functions, respectively operators, are the observables of quantum mechanics -- they dictate, effectively, what you measure.
- Under the deformation, the usually commutative multiplication of functions on phase space is replaced by a noncommutative star product (commutativity means that the order in which you multiply doesn't matter, i.e. that a∙b = b∙a; it was Heisenberg who first realised that you need something noncommutative to do quantum mechanics).
With the concept of a phase space point no longer meaningful, you need to generalize to the Wigner distribution, a quasi-probability distribution over phase space whose time evolution is given by the deformed version of Liouville's theorem (the bracket appearing in it, called the Moyal bracket, is related to the star product in the same way the Poisson bracket is related to ordinary phase space multiplication). This equation is the equivalent of the von Neumann equation in traditional Hilbert space quantum mechanics, which is in turn equivalent to the famous Schrödinger equation that describes the time evolution of quantum mechanical systems (and, accordingly, the Wigner distribution is the Wigner-Weyl transform of the quantum mechanical density matrix). Thus, we get a formalism fully equivalent to Hilbert space quantum mechanics, just from a simple deformation.
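For the mathematically inclined, the central formulas look as follows (standard expressions from deformation quantization, quoted here without derivation; ħ = h/2π, and the arrows indicate whether a derivative acts on the left or the right factor):

```latex
% The Moyal star product deforms pointwise multiplication on phase space:
\[
  f \star g
    \;=\; f \exp\!\left[ \frac{i\hbar}{2}
      \left( \overleftarrow{\partial_x}\,\overrightarrow{\partial_p}
           - \overleftarrow{\partial_p}\,\overrightarrow{\partial_x} \right)
      \right] g
    \;=\; f g \,+\, \frac{i\hbar}{2}\,\{f, g\} \,+\, O(\hbar^2).
\]
% The Moyal bracket stands to the star product as the Poisson bracket
% stands to ordinary multiplication, and reduces to it as hbar -> 0:
\[
  \{f, g\}_M \;=\; \frac{f \star g - g \star f}{i\hbar}
    \;\xrightarrow{\;\hbar \to 0\;}\; \{f, g\}.
\]
% Time evolution of the Wigner distribution W (the deformed Liouville
% equation, equivalent to the von Neumann equation):
\[
  \frac{\partial W}{\partial t} \;=\; \{H, W\}_M .
\]
```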
If that all seemed a bit much, don't worry, it won't be on the test!
Maximum Information
In this post, we have seen how quantum mechanics follows from the existence of a minimum phase space area, which in turn we have concluded must exist because no universal system can approximate an Ω-number to greater than finite precision -- because of incompleteness, in other words. However, I don't wish to claim this as a derivation of quantum mechanics in the proper sense. The argument, as such, is heuristic at best, and only intended to serve as motivation for a direction in which to look for reasons for quantum mechanics -- a principle of quantumness, so to speak.
In the literature, an approach towards finding such a 'principle of quantumness' in a principle of maximum information is somewhat common, and has been shown to be quite fruitful. The basic idea is to limit the amount of information an observer can obtain from a system, and several approaches have managed to glean large chunks of quantum mechanics from not much more than this assumption. The interested reader might for instance want to take a look at Rovelli's Relational Quantum Mechanics, Chris Fuchs' Quantum Mechanics as Quantum Information, Zeilinger's Foundational Principle for Quantum Mechanics, or just peruse Alexei Grinbaum's review paper which compares, contrasts, and to a certain extent unifies these approaches.
The connection between 'maximum information' principles and the view I propose may seem obvious, but it can be made more precise by appealing to what Chaitin called his 'heuristic principle' (since proven by Calude and Juergensen): that one cannot derive a theorem from a set of axioms if the Kolmogorov complexity of the theorem significantly exceeds that of the axioms.
The principle is intuitively obvious -- Kolmogorov complexity provides a measure of the information content of the axiom system, so obtaining a theorem with a higher Kolmogorov complexity would seem to create information 'out of thin air'; and the manipulations one performs in order to make a formal deduction, which can be cast in terms of purely symbolic operations, certainly seem information-preserving, since they work on an entirely syntactic, as opposed to semantic, level.
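Schematically, with all constants and technical conditions suppressed (this is a paraphrase of the principle as stated above, not the precise form proven by Calude and Juergensen):

```latex
% Chaitin's heuristic principle: a set of axioms A cannot yield a
% theorem t whose Kolmogorov complexity substantially exceeds K(A);
% equivalently, sufficiently complex truths are unprovable from A.
\[
  K(t) \;\gg\; K(A) \quad\Longrightarrow\quad A \nvdash t .
\]
```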
Thus, incompleteness really can be seen as being equivalent to a 'maximum information' principle for formal axiomatic systems -- corroborating the notion that incompleteness in axiom systems and quantumness in the real world are ultimately the same kind of thing.