Saturday, 19 November 2011

The Origin of the Quantum, Part III: Deviant Logic and Exotic Probability



Classical logic is a system concerned with certain objects that can take either of two values (usually interpreted as propositions that may be either true or false, commonly denoted 1 or 0 for short), and with ways to connect them. Though its origins can be traced back to antiquity, and to the Stoic philosopher Chrysippus in particular, its modern form was essentially introduced by the English mathematician and philosopher George Boole (and is thus also known as Boolean algebra) in his 1854 book An Investigation of the Laws of Thought; he intended it as a formalization of how humans carry out mental operations. To this end, Boole introduced certain connectives and operations meant to capture the ways a human mind connects and operates on propositions in the process of reasoning.
An elementary operation is that of negation. As the name implies, it turns a proposition into its negative, i.e. from 'it is raining today' to 'it is not raining today'. If we write 'it is raining today' for short as p, 'it is not raining today' gets represented as ¬p, '¬' thus being the symbol of negation.
Two propositions, p and q, can be connected to form a third, composite proposition r in various ways. The most elementary and intuitive connectives are the logical and, denoted by ∧, and the logical or, denoted by ∨.
These are intended to capture the intuitive notions of 'and' and 'or': a composite proposition r, formed by the 'and' (the conjunction) of two propositions p and q, i.e. r = p ∧ q, is true if both of its constituent propositions are true -- i.e. if p is true and q is true. Similarly, a composite proposition s, formed by the 'or' (the disjunction) of two propositions p and q, i.e. s = p ∨ q, is true if at least one of its constituent propositions is true, i.e. if p is true or q is true. So 'it is raining and I am getting wet' is true if it is both true that it is raining and that I am getting wet, while 'I am wearing a brown shirt or I am wearing black pants' is true if I am wearing either a brown shirt or black pants -- but also if I am wearing both! This differs subtly from the way we usually use the word 'or': typically, we use 'or' in the so-called exclusive sense, distinguishing between two alternatives, either of which may be true, but not both; the logical 'or', however, is used in the inclusive sense, in which the composite proposition is also true if both of its constituent propositions are true.
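To make the truth-functional behaviour of these connectives completely explicit, here is a minimal sketch in Python (my own illustration, not part of the original discussion) that simply tabulates ¬, ∧ and ∨ over all combinations of truth values:

```python
# A minimal sketch (my own illustration): the connectives above as Python functions,
# together with their truth tables.

def neg(p):        # ¬p
    return not p

def conj(p, q):    # p ∧ q: true only if both p and q are true
    return p and q

def disj(p, q):    # p ∨ q: true if at least one of p, q is true (inclusive 'or')
    return p or q

values = [True, False]
print("p      q      ¬p     p∧q    p∨q")
for p in values:
    for q in values:
        print(f"{p!s:6} {q!s:6} {neg(p)!s:6} {conj(p, q)!s:6} {disj(p, q)!s:6}")

# Note in particular that disj(True, True) is True: the logical 'or' is inclusive,
# unlike the everyday, exclusive reading of the word.
```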

Saturday, 12 November 2011

Maxwell's Demon, Physical Information, and Hypercomputation



The second law of thermodynamics is one of the cornerstones of physics. Indeed, even among the most well-tested fundamental scientific principles, it enjoys a somewhat special status, famously prompting Arthur Eddington to write in his 1928 book The Nature of the Physical World:
The Law that entropy always increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations—then so much the worse for Maxwell's equations. If it is found to be contradicted by observation—well these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
But what, exactly, is the second law? And what about it justifies Eddington's belief that it holds 'the supreme position among the laws of nature'?
In order to answer these questions, we need to re-examine the concept of entropy. Unfortunately, one often encounters, at least in the popular literature, quite muddled accounts of this elementary (and actually quite simple) notion. Sometimes, one sees entropy equated with disorder; other times, a more technical route is taken, and entropy is described as a measure of a thermodynamic system's ability to do useful work. It is wholly unclear, at least at first, how the one notion is supposed to relate to the other.
I have tackled this issue in some detail in a previous post; nevertheless, it is an important enough concept to briefly go over again.
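As a small taste of where that discussion goes, here is a toy illustration (my own, and no substitute for the earlier post): if one pictures a macrostate as the set of microstates compatible with it, a dimensionless entropy can be taken simply as the logarithm of the number of those microstates, and 'disordered' macrostates are just those compatible with overwhelmingly many microstates.

```python
# A toy illustration (my own): entropy as the logarithm of the number of microstates.
# For N coins, the macrostate 'k heads' comprises C(N, k) microstates, and a
# dimensionless Boltzmann-style entropy is S = ln W.

from math import comb, log

N = 100
for k in (0, 10, 50):
    W = comb(N, k)   # number of microstates compatible with the macrostate 'k heads'
    print(f"{k:3d} heads out of {N}: W = {W:.3e}, S = ln W = {log(W):.2f}")
```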

Monday, 31 October 2011

The Origin of the Quantum, Part II: Incomplete Evidence



In the previous post, we had a first look at the connections between incompleteness, or logical independence -- roughly, the fact that for any sufficiently powerful (and consistent) formal system, there exist propositions that the system can prove neither true nor false -- and quantumness. In particular, we saw how quantum mechanics emerges if we consider a quantum system as a system only able to answer finitely many questions about its own state, i.e. as a system that contains a finite amount of information. The state of such a system can be mapped to a special, random number, an Ω-number or halting probability, which has the property that any formal system can derive only finitely many bits of its binary expansion; this is a statement of incompleteness, known as Chaitin's incompleteness theorem, equivalent to the more familiar Gödelian version.
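To make the definition of such a halting probability a bit more tangible, here is a deliberately crude sketch (entirely my own, and not faithful to the technical requirements -- a genuine Ω-number is defined over a prefix-free universal machine, which this toy interpreter is not). It only illustrates how Ω = Σ 2^(-|p|), summed over halting programs p, can be approximated from below by running ever more programs for ever more steps:

```python
# A toy sketch (my own illustration): approximating a 'halting probability' from below.
# 'Programs' here are bit strings fed to a deliberately silly interpreter, so the number
# computed has none of Omega's deep properties; it only shows the approximation scheme.

from itertools import product

def toy_run(program, max_steps):
    """Interpret a bit string: '1' adds work, '0' removes it; halt when no work is left.
    Returns True if the program halts within max_steps (a stand-in for a real machine)."""
    work, steps, i = 1, 0, 0
    while work > 0 and steps < max_steps:
        bit = program[i % len(program)]
        work += 1 if bit == '1' else -1
        i += 1
        steps += 1
    return work == 0

def omega_lower_bound(max_len, max_steps):
    """Sum 2**(-|p|) over every program of length <= max_len seen to halt so far."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product('01', repeat=n):
            p = ''.join(bits)
            if toy_run(p, max_steps):
                total += 2.0 ** (-len(p))
    return total

print(omega_lower_bound(max_len=6, max_steps=100))
```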
In this post, we will exhibit this analogy between incompleteness and quantumness in a more concrete way, explicitly showcasing two remarkable results connecting both notions.
The first example is taken from the paper 'Logical Independence and Quantum Randomness' by Tomasz Paterek et al. Discussing the results obtained therein will comprise the greater part of this post.
The second example can be found in the paper 'Measurement-Based Quantum Computation and Undecidable Logic' by M. Van den Nest and H. J. Briegel; the paper is very interesting and deep, but unfortunately somewhat more abstract, so I will content myself with just presenting the result, without attempting to explain it in much depth.

Tuesday, 11 October 2011

The Origin Of The Quantum, Part I: An Incomplete Phase Space Picture


In the last post, we familiarized ourselves with some basic notions of algorithmic information theory. Most notably, we saw how randomness emerges when formal systems or computers are pushed to the edges of incompleteness and uncomputability.
In this post, we'll take a look at what happens if we apply these results to the idea that, like computers or formal systems, the physical world is just another example of a universal system -- i.e. a system in which universal computation can be implemented (at least in the limit).
First, recall the idea that information enters the description of the physical world through viewing it as a question-answering process: any physical object can be uniquely identified by the properties it has (and those it doesn't have); any two physical objects that have all the same, and only the same, properties are indistinguishable, and thus identified. We can thus imagine any object as being described by the string of bits giving the answers to the set of questions 'Does the object have property x?' for all properties x; note that absent an enumeration of all possible properties an object may have, this is a rather ill-defined set, but it'll serve as a conceptual guide.
In particular, this means that we can view any 'large' object as being composed of a certain number of 'microscopic', elementary objects, which are those systems that are completely described by the presence or absence of one single property, that may be in either of two states -- having or not having that particular property. Such a system might, for instance, be a ball that may be either red or green, or, perhaps more to the point, either red or not-red. These are the systems that can be used to represent exactly one bit of information, say red = 1, not-red = 0. Call such a system a two-level system or, for short, and putting up with a little ontological inaccuracy, simply a bit.
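To fix ideas, here is a small sketch (my own, with a hypothetical three-property enumeration standing in for the ill-defined 'set of all properties' mentioned above) of the object-as-answer-string picture:

```python
# A small sketch (my own) of the 'object as answer string' picture. Assumption: we fix a
# finite enumeration of properties up front -- exactly the step noted above as ill-defined
# in general, so this is purely a conceptual toy.

PROPERTIES = ["red", "round", "heavy"]   # hypothetical property enumeration

def answer_string(obj):
    """Encode an object as the bit string of answers to 'does it have property x?'."""
    return ''.join('1' if obj.get(prop, False) else '0' for prop in PROPERTIES)

ball_a = {"red": True, "round": True, "heavy": False}
ball_b = {"red": True, "round": True}    # 'heavy' absent is read as 'not heavy'

# Two objects with the same answers to every question are identified:
print(answer_string(ball_a), answer_string(ball_b),
      answer_string(ball_a) == answer_string(ball_b))

# The elementary, 'microscopic' case: a single property, i.e. one bit --
# e.g. red = 1, not-red = 0.
print(answer_string({"red": True})[0])
```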

Sunday, 18 September 2011

Information through the Lens of Computation: Algorithmic Information Theory



So far, our discussions of information and computation have been rather informal -- I have neglected to give detailed mathematical introductions to concepts like Shannon entropy, Turing universality, codes etc., for the simple reason that, for our purposes, a qualitative understanding has been fully sufficient.
However, if we want to make any quantitative statements about the notions of information and computation, it will benefit us to take another look at the subject in a slightly more formal context.
To this end, the most efficient framework seems to me to be that of algorithmic information theory, introduced and developed mainly by the mathematicians Ray Solomonoff, Andrey Kolmogorov, and Gregory Chaitin in the 1960s. According to Chaitin, AIT is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously"; as such, it will aid us in achieving a better understanding, in a quantitative way, of both subjects.
In a way, AIT is concerned with a concrete realisation, within the realm of computation, of the somewhat abstract-seeming notion of information or information content. To this end, its central concept is the so-called Kolmogorov complexity, which, loosely speaking, measures the information content of an object -- say, a string of binary digits -- by the length of the shortest program, on a given computer, that outputs this object. This might seem at first like a rather arbitrary definition: different computers will generally have different programs, of different lengths, that generate the string, leading to different complexities for one and the same object. But it turns out that despite this ambiguity, Kolmogorov complexity is useful independently of the concrete computational implementation it is based on: by the invariance theorem, the complexities assigned by any two universal computers differ by at most an additive constant that does not depend on the string in question.
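Kolmogorov complexity itself turns out to be uncomputable, but a quick way to get a feel for the idea -- and a crude, machine-dependent upper bound on it -- is to use an off-the-shelf compressor as a stand-in for the 'shortest program'. A sketch (my own, not a construction from AIT proper) follows:

```python
# A rough sketch (my own): using a general-purpose compressor as a crude stand-in for
# 'length of the shortest description'. The absolute numbers depend on the compressor
# chosen -- exactly the machine-dependence discussed above, which the invariance theorem
# tames only up to an additive constant.

import random
import zlib

def compressed_size(s: bytes) -> int:
    return len(zlib.compress(s, 9))

regular = b"01" * 5000                    # highly regular 10000-byte string
random.seed(0)
random_looking = bytes(random.getrandbits(8) for _ in range(10000))  # typical 'random' string

print("regular string:       ", compressed_size(regular), "bytes")
print("random-looking string:", compressed_size(random_looking), "bytes")
# The regular string compresses to a tiny fraction of its length ('a short program
# generates it'); the random-looking one barely compresses at all, mirroring the idea
# that its shortest description is about as long as the string itself.
```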

Saturday, 3 September 2011

The Universal Universe, Part IV: Computational Complexity



Let me start this post on a personal note and apologize for the unannounced break -- I was on vacation in Norway, and, due to poor planning on my part, had an exam right afterwards, which kept me from attending to the blog as I planned.
It all turned out for the best in the end, though, since during my stay in Norway, I stumbled across this great essay by Scott Aaronson (discussed on his blog here), which alerted me to something I hadn't paid much attention to, but should have -- the field of computational complexity, and especially its implications for philosophy. Having now had some time to digest its contents, I think there are some important issues for me to think, and write, about.
Of course, I was aware of the field of computational complexity and its basic notions prior to reading that essay, but thought of it as concerning 'merely' problems of practicality -- making the classic philosopher's error (not that I'm a philosopher, but this apparently doesn't keep me from committing their errors) of being content with showing something possible 'in principle', while dismissing the issues involved in making it possible 'in practice' as mere technicality.
Aaronson has now taken up the challenge of breaking this unfortunate habit, and does so with great clarity, showing that, often enough, thinking merely about what is possible in principle -- or, in more appropriate terms, what is computable -- without thinking about the difficulty involved in actually carrying out that computation misses most of the good stuff. One particularly interesting argument is his resolution of the so-called 'waterfall problem', which essentially poses that any physical process can be interpreted as implementing any computation, thus making the proposal that sentience can come about merely through computation somewhat suspect: apparently, we are forced to conclude that if there is a computation that gives rise to sentience, every (or nearly every) physical system can be viewed as implementing that computation, and hence as giving rise to sentience.
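As a minimal numerical illustration of the gap between 'in principle' and 'in practice' (my own example, not Aaronson's): a brute-force search over all assignments of n bits is perfectly computable, but its running time doubles with every additional bit, so it stops being a 'mere technicality' rather quickly.

```python
# A minimal illustration (my own): 'computable in principle' vs. 'feasible in practice'.
# Checking all 2**n assignments of n bits is trivially computable; the table below shows
# how quickly that stops being something anyone could actually do.

RATE = 10**9   # hypothetical: one billion assignments checked per second

for n in (20, 40, 60, 80, 100):
    seconds = 2**n / RATE
    years = seconds / (3600 * 24 * 365)
    print(f"n = {n:3d}: 2^n = {2**n:.2e} assignments, ~{years:.2e} years at {RATE:.0e}/s")
```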

Tuesday, 26 July 2011

Consciousness Explained Anyway



Today, we are going on a slight diversion from the course of this blog so far, in order for me to write down some thoughts on the nature of human consciousness that have been rattling around in my head.
In a way, this is very apropos to the overarching theme of computationalism (which I personally take to be the stance that all of reality can be explained in computable terms, a subset of physicalism) that has pervaded the posts so far (and will continue to do so), because the idea that consciousness can't be reduced to 'mere computation' is often central to supposed rebuttals of that stance.
In another way, though, consciousness is far too high-level a property to properly concern ourselves with right now; nevertheless, I wanted to write these things down, in part just to clear my head.
My thoughts on consciousness basically echo those of the American philosopher Daniel Dennett, as laid out in his seminal work Consciousness Explained. However, while what Dennett laid out should perhaps most appropriately be called a theory of mental content (called the Multiple Drafts Model), I will in this (comparatively...) short posting merely attempt to answer one question, which, however, seems to me the defining one: How does subjective experience arise from non-subjective fundamental processes (neuron firings, etc.)? How can the impression of having a point of view -- of being something, someone with a point of view -- come about?