Tuesday, July 12, 2011
The Universal Universe, Part II: ...but is it?
I ended the previous post with the encouraging observation that if the universe is computable, then it should in principle be possible for human minds to understand it -- the reasoning essentially being that each universal system can emulate any other. But the question now presents itself: is the universe actually computable?
At first sight, there does not seem to be any necessity for it to be -- after all, computation and computational universality may be nothing but human-derived concepts, without importance for the universe as it 'really is'. However, we know that it must be at least computationally universal, as universal systems can indeed be built (you're sitting in front of one right now) -- the universe can 'emulate' universal systems, and thus must be universal itself (here, I am using the term universal in the somewhat loose sense of 'able to perform every calculation that can be performed by a universal Turing machine, if given access to unlimited resources'). Thus, the only remaining possibility is that the universe might be more than universal, i.e. that the notion of computation does not suffice to exhaust its phenomenology.
And indeed, it is probably the more widespread notion at present that the universe contains entities that do not fall within the realm of the computable. The discussion is sometimes framed -- a little naively, in my opinion -- as the FQXi did recently in its annual essay contest, in the form of the question: "Is reality digital or analog?"
What's most commonly at stake here is the nature of spacetime -- an 'analog' continuum versus a 'digital' discretum, like a point set or a lattice.
What does this have to do with the question of computability? Well, the problem is essentially that, although there are infinitely many, there are not enough Turing machines to compute every point of a continuum -- such that if a particle moves through such a continuum, we will almost never be able to give its position an exact numerical value by computable means.
This has to do with a technique we will get to again later, known as the diagonal argument. The diagonal argument is due to Georg Cantor, and it can be used to show that there are 'more' real numbers than there are natural numbers, even though there are infinitely many of the latter. It's a simple enough proof to give here; all one really has to know is how to count, and even that I'm going to review briefly.
If we have two sets of things, say apples and pears, we know that there are equally many of each if we can pair them with one another without having any left, one apple to one pear. If in the end, we have unpaired apples left, there are more apples; if we have surplus pears, those are more numerous.
For finite sets, this is well in line with our ordinary notion of sizes or amounts; however, when it comes to the infinite, certain apparent oddities turn up. One is that one can use this technique to show that there are as many even numbers as there are natural numbers in total, even though the latter comprise all the even numbers and the odd ones besides! To see this, we only need to pair each natural number with an even number, which we can do like so: (1, 2), (2, 4), (3, 6), (4, 8),...(n, 2n),...
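To make the pairing concrete, here is a minimal sketch in Python (the language is my choice; nothing in the argument depends on it):

```python
# Pair each natural number n with the even number 2n -- a one-to-one
# correspondence with nothing left over on either side.
def natural_even_pairs(how_many):
    for n in range(1, how_many + 1):
        yield (n, 2 * n)

for pair in natural_even_pairs(5):
    print(pair)  # (1, 2), (2, 4), (3, 6), (4, 8), (5, 10)
```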
Every natural number gets an even-number partner. The two sets are said to be of equal cardinality. One might say, 'well, that's actually fine: the set of even numbers is infinite, and since infinity is the biggest thing there is, adding to it can't make it bigger; thus, adding the odd numbers must yield another infinite set'. And in a sense, that's right; however, thinking about infinity always comes with a snag, so in another sense, it's not really right at all.
That snag shows itself when contemplating the real numbers, or reals for short. If infinity is the biggest thing there is, and there are infinitely many natural numbers, then we should be able to find a natural number to pair with every real, right?
Well, let's try -- and let's even simplify things a little by confining ourselves to reals between 0 and 1. The ordering does not matter much, so we can just randomly associate numbers: (1, 0.634522345...), (2, 0.323424129...), (3, 0.973247638...), and everything seems to go smoothly. However, now think of the following number, which we'll call d for diagonal -- it's constructed the following way: take the first digit of the first number, lower it by one, and make it the first digit of d ('roll over' to 9 if you're already at 0); then take the second digit of the second number, lower it by one, and make it the second digit of d; then take the third digit of the third number...
Now, the number thus constructed is obviously different from the first number, since it is different in the first digit; it is obviously different from the second number, since it is different in the second digit; it is different from the third number, since it is different in the third digit -- generally, it is different from the kth number, since it differs in the kth digit. But that means that it is different from all the numbers we have so far paired with a natural number -- it is 'left over'!
Of course, there is an easy fix for this: pair it with a natural number. There's infinitely many of 'em, so it's not like we're gonna run out! However... then, we can just play the game again, constructing a new number d' that has no natural partner. And in fact, there's no end to this -- we always can construct a new diagonal, yielding a real number without partner in the naturals.
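The construction is mechanical enough to put into code. A small sketch, assuming each listed real is handed to us as a string of its decimal digits:

```python
def diagonal(digit_rows):
    """Cantor's diagonal construction: given the decimal digits of the
    listed reals (digit_rows[k] holds the digits of the k-th real),
    build a number differing from the k-th real in its k-th digit."""
    new_digits = []
    for k, row in enumerate(digit_rows):
        d = int(row[k])
        new_digits.append(str((d - 1) % 10))  # lower by one; 0 rolls to 9
    return "0." + "".join(new_digits)

# The three reals from the list above, truncated to nine digits each:
listed = ["634522345", "323424129", "973247638"]
print(diagonal(listed))  # 0.512 -- differs from real k at digit k
```

Feeding the output back in as a new row and re-running is exactly the 'play the game again' step: the function happily produces a fresh diagonal every time.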
The inescapable conclusion is then that even though there are infinitely many natural numbers, there are more real numbers. One often says that the natural numbers (or any set of equal cardinality) are countable (or countably infinite), whereas the reals are uncountable (or uncountably infinite).
Now let's get back to computability. It turns out that you can associate with each Turing machine a natural number that identifies it uniquely -- i.e. there are countably many Turing machines. But this means that there are only countably many numbers that can be computed -- meaning that some real numbers are non-computable. Thus, if spacetime is continuous, which just means that there are as many points as there are real numbers, there may be matters of fact that no computer could ever tell us -- for instance, if two particles are separated in space by a distance whose magnitude, given a certain fixed unit of length, is equal to a non-computable real, no computer could ever produce this result.
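The details of the numbering don't matter, only that it exists. Here is one naive scheme, assuming a machine is specified by a finite description string -- the encoding itself is a toy of my own choosing; any injective map would do:

```python
def machine_number(description: str) -> int:
    """Injectively map a finite machine description to a natural number
    by reading its bytes as digits 1..256 of a 'bijective base-256'
    numeral. Every Turing machine has a finite description, so every
    machine gets its own number: there are only countably many."""
    n = 0
    for byte in description.encode("utf-8"):
        n = 256 * n + byte + 1
    return n

print(machine_number("if 0: write 1, move right; halt"))
```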
This seems like a dire result for computability: most theories of physics, including General Relativity and Quantum Mechanics/Quantum Field Theory, are built on the continuum, and, if evidence for a physical theory is considered evidence for the existence of the mathematical entities it presupposes (a strong, but not unanimously held position -- in his book Science Without Numbers, Hartry Field has advanced the view known as (mathematical) fictionalism, according to which the mathematical entities underlying valid physical theories may be regarded as useful fictions rather than 'real things', which can in principle be done away with), their strong experimental support would seem to crush the hopes of the computationalist.
Be that as it may, we know that these theories can't be the full picture -- in certain regimes, unfortunately experimentally inaccessible at the moment, they become nonsensical, or contradict one another. It is thus in those areas that one might hope for a departure from continuous structures -- and indeed, promising theoretical evidence from several research programmes exists, the strongest perhaps coming from the holographic principle (discussed briefly in this previous post): it suggests that there is an upper limit to the entropy within a given region of spacetime, proportional to the area of the surface bounding its volume. But the usual interpretation of entropy is as a measure of the microstates of a system; if it is finite, so must the number of microstates be, and the finite is always computable.
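To get a feeling for the numbers involved, here is a back-of-the-envelope sketch, assuming the bound takes the standard Bekenstein-Hawking form -- one quarter of the surface area, counted in Planck areas:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_planck_sq = hbar * G / c**3  # Planck length squared, ~2.6e-70 m^2

def max_bits_in_sphere(radius_m):
    """Holographic bound: the entropy inside a sphere is at most its
    surface area over four Planck areas (in nats); divide by ln 2 to
    count bits. A finite area means finitely many microstates."""
    area = 4 * math.pi * radius_m ** 2
    return area / (4 * l_planck_sq) / math.log(2)

print(f"{max_bits_in_sphere(1.0):.2e} bits fit in a sphere of radius 1 m")
```

An enormous number, to be sure -- but a finite one.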
Nevertheless, it is important to spend a moment or two discussing what, exactly, a non-computable universe might mean with regard to the prospect of ever gaining a working understanding of it.
At first, it might seem that we need not be too bothered by the prospect of non-computability; indeed, there might be cause to cheer for it, for the ability to harness this non-computability in principle would enable us to build so-called hypercomputers.
Hypercomputers differ from Turing machines and other universal systems in their capacity for computation: problems unsolvable by Turing machines may be solved with the aid of a hypercomputer. Equivalently, a hypercomputer is a system whose behaviour can't be emulated by any universal system.
This looks like a very welcome thing at first blush: problems unsolvable by Turing-equivalent computation suddenly become manageable. One particular such problem is known as the halting problem. Turing machines, depending on their input, either halt and produce an output -- or never halt, but continue running forever (they may get 'stuck in an infinite loop'). The halting problem then is the question of whether a certain Turing machine, given a certain input, will ever halt and produce an output. No ordinary Turing machine can solve this problem (I plan to talk about the reason for this in a future post -- it is actually quite similar to Cantor's proof of the uncountability of the reals above).
However, a hypercomputer can! And while this may seem somewhat academic, there are actually a lot of nice things you can do once you can solve the halting problem -- for instance, you might wonder whether there exists a number with a certain property, but can't think of any nice way to prove that there is or isn't one. Now, you might set up a computer to successively test numbers for this property -- and sure enough, if there is one, it will eventually halt, and output that number. The problem is: how long do you wait? For any given waiting time, the computer will only have explored a finite number of possibilities, meaning that there might yet be an answer ahead; or, it will continue happily chugging along forever.
But, with the power of hypercomputation, you can just set up the program, ask the hypercomputer to determine whether or not it will halt, and presto -- if it does, there is a number having the property in question; if it doesn't, there is none. This last answer is impossible to get without the hypercomputer.
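In code, the situation might be sketched as follows; the halting_oracle below is, of course, purely hypothetical -- which is the whole point:

```python
def has_property(n):
    """Example property: n is an odd perfect number (it is an open
    question whether any such number exists)."""
    return n % 2 == 1 and sum(d for d in range(1, n) if n % d == 0) == n

def brute_force_search():
    """Halts (returning a witness) iff some natural has the property."""
    n = 1
    while True:
        if has_property(n):
            return n
        n += 1

def halting_oracle(program):
    """Hypothetical hypercomputer primitive: True iff program() would
    eventually halt. NOT implementable on any Turing-equivalent device."""
    raise NotImplementedError("requires hypercomputation")

# With the oracle, the existence question is settled in one step --
# including the negative answer that no finite amount of waiting gives:
# if halting_oracle(brute_force_search):
#     print("a number with the property exists:", brute_force_search())
# else:
#     print("no natural number has the property")
```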
However, a new question arises: how do you know the hypercomputer is right?
For an ordinary computer, it's easy: if need be, you can always break out pen and paper, and check for yourself, at least in principle. However, all that you can accomplish this way is equivalent to what a Turing machine can accomplish -- thus, you can't check the hypercomputer's result. Of course, there are other ways of indirectly convincing yourself of the hypercomputer's accuracy -- consistency of results, possibly with other hypercomputers, knowledge of its design and operating principles, etc. Yet, these can't absolutely cement your faith in the hypercomputer.
Worse yet, there does not even seem to be a way to convince yourself that some device even is a hypercomputer -- if I hand you a black box, then as long as it has more computing power -- more computational resources -- than you have access to, it can fool you into thinking it is a hypercomputer. For instance, it can fake being able to solve the halting problem, and you won't be able to catch it out: suppose it can run any program for significantly more steps than you can. If the program halts, and you are eventually able to discover that, then the box will be able to discover it as well; if the program either doesn't halt, or halts only after more steps than you can ever run it for, the box can pronounce it non-halting (or halting), and you won't be able to prove otherwise.
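Here is a sketch of the bluff, under the assumption that the box can afford vastly more simulation steps than any examiner:

```python
def fake_halting_oracle(program, step_budget=10**9):
    """A black box that merely *pretends* to solve the halting problem.

    'program' is modelled as a generator function that yields once per
    step and returns when it halts. The box runs it for a large but
    finite number of steps: if it halts within the budget, the answer
    is provably correct; if not, the box guesses 'does not halt'. As
    long as the budget dwarfs whatever the examiner can simulate, the
    bluff cannot be called."""
    it = program()
    for _ in range(step_budget):
        try:
            next(it)
        except StopIteration:
            return True   # verifiably correct: the program did halt
    return False          # an educated, unfalsifiable guess

# A toy program that halts after a million steps:
def slow_halter():
    for _ in range(10**6):
        yield

print(fake_halting_oracle(slow_halter))  # True -- well within budget
```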
These problems extend to describing physical reality in hypercomputational (or non-computational) terms. Everything we actually think about, or at any rate, everything we ever write down, is certainly computable, as is evidenced by our ability to write it down. Everything we measure, we measure to finite accuracy -- and the finite is always computable. So, even though our current theories make reference to non-computable entities, such as the continuum of real numbers, everything we actually do with them is perfectly computable -- as is evidenced by the success of computer models in modern theoretical physics. There is thus no non-computable entity that we could ever become aware of this way -- if there were, we would be faced with a theory whose consequences we could not compute; the computation might, for instance, depend on being able to solve the halting problem. Of course, such a theory, lacking predictivity, could never be checked.
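The point about finite accuracy can be made very concretely: any recorded measurement has finitely many digits, and is therefore an exact rational number -- a computable object par excellence. A trivial illustration (the reading itself is made up):

```python
from fractions import Fraction

# Any value we actually write down has finitely many digits, so it is
# an exact rational number, trivially computable.
reading = "1.2345"                # a made-up lab reading, in metres
as_rational = Fraction(reading)   # 2469/2000, exactly
print(as_rational)
```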
So, then, what does this mean -- is reality digital, or analog? Well, as I hinted at earlier, I do not necessarily think this is a good way to pose the question. It recalls older questions, such as 'is light made of particles or waves?', and 'is matter discrete, or continuous?', both of which have in the light of modern theories been shown to be false dichotomies.
And indeed, information-theoretic considerations seem to point a way towards unmasking the digital/analog distinction as similarly apparent: a bandlimited signal -- a signal which contains no frequencies above a certain threshold -- can be exactly reconstructed from a sampling at finite intervals (where the sampling interval is inversely proportional to the maximum frequency present in the spectrum), i.e. it suffices to know the signal at a discrete set of points to interpolate the signal over the full continuum. This is the essential content of the Nyquist-Shannon sampling theorem. Both the continuous and the discretized, sampled signal thus contain the same information (which is a nice factoid to know if one is confronted with an audiophile harping on about the supposed superiority of analog signals over digital ones).
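A quick numerical illustration of the reconstruction, with the signal and the rates arbitrarily chosen by me:

```python
import numpy as np

f_max = 3.0    # highest frequency present in the signal, Hz
T = 1.0 / 8.0  # sampling interval, safely below 1/(2*f_max) ~ 0.167 s

def signal(t):
    """A bandlimited test signal: components at 1 Hz and 3 Hz only."""
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

# Sample at the discrete points n*T ...
n = np.arange(-400, 401)
samples = signal(n * T)

# ... then recover the value at an arbitrary in-between point t via
# Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT)/T)
t = 0.123
reconstructed = np.sum(samples * np.sinc((t - n * T) / T))
print(signal(t), reconstructed)  # the two values agree closely
```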
This has a bit of a catch in the need for bandlimitation -- physically, this equates to a so-called ultraviolet (UV) cutoff: basically, a highest possible energy, or frequency. Whether or not there is such a thing is unknown -- though basic old General Relativity seems to strongly hint at it: concentrate too much energy in too small a space, and you create a microscopic black hole, making it impossible to probe spacetime beyond a certain scale.
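For what it's worth, the scale in question is the Planck length; estimating it is a two-liner of standard dimensional analysis, not specific to any particular quantum gravity proposal:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Roughly the scale at which a photon energetic enough to resolve a
# distance would itself collapse into a black hole of that very size.
l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_planck:.2e} m")  # about 1.6e-35 m
```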
So, in this sense, whether the universe is digital or analog is not the question; more pertinently, we should think about whether it is computable or not -- and here, I think, most lines of evidence point towards a computable universe.