So far on this blog, I have argued that quantum mechanics should be most aptly seen as a generalization of probability theory, necessary to account for complementary propositions (propositions which can't jointly be known exactly). Quantum mechanics can then be seen to emerge either as a generalization (more accurately, a deformation) of statistical mechanics on phase space, or, more abstractly (but cleaner in a conceptual sense) as deriving from quantum logic in the same way classical probability derives from classical, i.e. Boolean, logic.

Using this picture, we've had a look at how it helps explain two of quantum mechanics' most prominent, and at the same time, most mysterious consequences -- the phenomena of interference and entanglement, both of which are often thought to lie at the heart of quantum mechanics.

In this post, I want to have a look at the interpretation of quantum mechanics, and how the previously developed picture helps to make sense of the theory. But first, we need to take a look at what, exactly, an interpretation of quantum mechanics is supposed to accomplish -- and whether we in fact need one (because if we find that we don't, I could save myself a lot of writing).

Fundamentally, this is a question of what one expects of a physical theory, i.e. what one thinks a physical theory is, and what it supposedly provides. Broadly, one can identify two strands of answers to this question. One is *realist*: basically, the conception that a theory tells us something about things that are out there in the world, that really exist, in some sense. The other is *anti-realist*, or perhaps *non-realist*: a position which maintains that either there is no such thing as an 'out there', or that its existence is wholly immaterial to physical theory -- science mainly concerns itself with what we can say about nature, not with how nature actually is.

This latter position is also known as *instrumentalism*, and when it comes to quantum mechanics, it is most closely associated with the figure of Niels Bohr. Basically, it amounts to the position that physical theories in general, and quantum mechanics especially, should be treated as a kind of black box, into which one can input a precise statement of a physical problem, such as an experimental setup, and which then outputs the expectation for the experiment's outcome. This is a consistent and in principle adequate point of view that one can always take recourse to; conceptual problems raised by any given theory may essentially be treated as of little consequence, as the theory is only an artifice, a construction to relate initial conditions to observable outcomes, where no ontological weight is put on the in-between machinery.

Nevertheless, if one expects *explanations*, as opposed to mere *predictions*, from physical theories -- an answer not only to what happens, but also to how it happens, and perhaps even to why -- then this point of view strikes me as deeply inadequate. In particular, the correctness of any given theory becomes an article of faith alone: even though your black box has output the right predictions a thousand times in a row, one does not have license to infer that it will do so again the one-thousand-and-first time. With a theory that explains what happens by explaining how it happens, on the other hand, one can stake one's faith at least in the presumption that if the mechanism is correct, i.e. if the answer to the how is in some way a faithful representation of how things actually happen, then the prediction should be expected to be correct any given time. There is also a more aesthetic flaw: on an instrumentalist reading, correct physical theories do not alleviate our ignorance about the world, they in fact only compound it -- not only do we not know how nature works, we also don't know why a certain theory appears to describe the outcome of experiments as well as it does.

Thus, it strikes me as far more desirable to find an account of a physical theory that explains its correctness in terms of its relation to nature, i.e. to what 'actually' happens. Nevertheless, I find myself compelled to use scare quotes whenever talking about things 'actually' happening or being 'really there'. The reason for this, as I have previously argued, is that I don't necessarily believe that there is a unique matter of fact as to *what* 'actually' happens: there may be different accounts, different models, or gauges, if you will, that lead to the same observable reality. (To reiterate an example I used before, consider the way the knight moves in a game of chess: there are many different accounts one can give of the rule -- one straight, one diagonally; in the form of an L; two straight, then one in an orthogonal direction; etc. -- all of which lead to the same game of chess.) Ultimately, this is because of computational equivalence: different computations may yield the same output; indeed, any universal computational device can be used to emulate, or can be seen as, any other, and thus computes all and only the same things.

Be that as it may, this puts me in the position of trying to understand what quantum mechanics is trying to tell us about the world -- which is to interpret it.

**The Schizophrenic Quantum Picture**

The main problem in the interpretation of quantum mechanics is known as the *measurement problem*. Broadly, it can be formulated as the problem of how a theory with all the apparent fuzziness and vagaries of quantum mechanics can account for our experience of a determinate, definite reality, or even for the specific outcomes every measurement generates. More specifically, it can be seen as the tension between two ways quantum systems seem to evolve: one is the unitary evolution dictated by the Schrödinger equation; the other is the sudden 'collapse' of this description to settle on a definite answer to a question posed in the form of a measurement.

To make this more explicit, recall the notion of superposition: a quantum system may be not merely in either of two dichotomic states, but also in an arbitrary linear combination of them. So if |0⟩ and |1⟩ denote two possible states for a qubit, then its general state can be written as |ψ⟩ = ɑ|0⟩ + β|1⟩, where ɑ and β are in general complex coefficients such that |ɑ|² + |β|² = 1, and where |ɑ|² and |β|² give the probability that experiment will find the qubit in the state |0⟩ or |1⟩, respectively (see also the entry on interference). (In the following, I will not bother with carrying these factors through; sticklers may freely introduce them as needed, or multiply everything with 1/√2 to ensure equal probability and normalization.)

The problem with a state like |ψ⟩ is that whenever we undertake a measurement, what we actually find is either |0⟩ or |1⟩, and that within the usual quantum dynamics, there is no process that can account for this.
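(For the programmatically inclined, this is easy to play with numerically. Here is a minimal numpy sketch of a qubit state, its normalization, and the squared-modulus rule -- the variable names are mine, and nothing here is meant as more than an illustration of the formalism.)

```python
import numpy as np

# A qubit as a complex 2-vector over the basis {|0>, |1>}.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Arbitrary complex amplitudes alpha and beta (here chosen equal in modulus).
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# Normalization: |alpha|^2 + |beta|^2 = 1.
norm = abs(alpha)**2 + abs(beta)**2

# Squared-modulus (Born) probabilities of finding |0> or |1>.
# np.vdot conjugates its first argument, giving the inner product <0|psi>.
p0 = abs(np.vdot(ket0, psi))**2
p1 = abs(np.vdot(ket1, psi))**2

print(round(norm, 10), round(p0, 10), round(p1, 10))  # 1.0 0.5 0.5
```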

To see this, consider that there is good reason to believe that quantum mechanics ought to apply to all physical systems equally. The simplest argument is that ultimately, everything is made out of particles and quanta, and there is no rule according to which a system, upon reaching some critical size, stops being described by quantum mechanics (though there are attempts to introduce just such a rule to account for the problem). So whatever we use to measure a quantum system should itself be describable by quantum mechanics -- and thus, the interaction between the measuring apparatus and the system ought to be no different from any other quantum interaction. So if we write the state of the measuring apparatus pre-measurement as |ready⟩, and we make a measurement on a system in the (non-superposed) state |0⟩, then we ought to expect, if the apparatus is faithful, i.e. always indicates the right state after the measurement, an evolution like the following:

|ready⟩|0⟩ → |"0"⟩|0⟩,

where |"0"⟩ just means the state of the apparatus that indicates that it has measured the qubit to be in the state |0⟩, say by having a certain light lit, or a pointer in a certain position. Similarly, if the qubit's state is in fact |1⟩, then the evolution should be:

|ready⟩|1⟩ → |"1"⟩|1⟩,

where again |"1"⟩ means that the measurement apparatus is in a certain state that indicates the outcome of its measurement was to find the qubit in the state |1⟩. So far, this is all well and good.

There is a notion here that I should introduce, known as the *eigenvalue-eigenstate link*. This means nothing else than that a system can only be said to have a certain property if it is in an eigenstate of having that property; thus, for a qubit to have a certain value, it needs to be in an eigenstate of having that value, i.e. either |1⟩ or |0⟩ for a value of 1 or 0, respectively. Thus, if it is in a superposed state, it does not have any definite value.

But now, consider the qubit to be in the superposed state |ψ⟩ = |0⟩ + |1⟩, where, as mentioned above, I have neglected to normalize the state. Now, knowing that the quantum dynamics is linear, the evolution for the total state is given by the combination of the evolutions of the two components as described above. Thus, we have:

|ready⟩(|1⟩ + |0⟩) → |ready⟩|1⟩ + |ready⟩|0⟩ → |"1"⟩|1⟩ + |"0"⟩|0⟩.

But notice what's now happened: if |"1"⟩ indicates a definite state of the measurement apparatus, and likewise does |"0"⟩, then the above state is one in which the apparatus is in neither of those states -- it fails to have a definite state at all, and is in a superposition of being in the states indicating having measured the qubit as |1⟩ and having measured it as |0⟩, corresponding to the superposition the qubit itself is in.
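This linearity argument can be checked directly in a toy model. Below, I encode the qubit and a two-state apparatus as vectors and model the faithful measurement interaction as a CNOT-type unitary that copies the qubit's value into the apparatus register -- that particular choice of interaction is an assumption of the sketch, not something derived, but any faithful linear interaction would show the same effect.

```python
import numpy as np

kron = np.kron
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Faithful measurement as a CNOT-type unitary: |q>|ready> -> |q>|"q">.
# Tensor order here: qubit (x) apparatus (the reverse of the text's notation).
# Basis order: |q,a> = |00>, |01>, |10>, |11>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

ready = ket0  # encode |ready> as the apparatus |0> state

# Definite inputs behave as expected: |0>|ready> -> |0>|"0">, etc.
assert np.allclose(U @ kron(ket0, ready), kron(ket0, ket0))
assert np.allclose(U @ kron(ket1, ready), kron(ket1, ket1))

# Superposed input: linearity forces an entangled output,
# (|0>|"0"> + |1>|"1">)/sqrt(2) -- the apparatus alone is in
# *neither* of its indicator states.
psi = (ket0 + ket1) / np.sqrt(2)
out = U @ kron(psi, ready)
print(out.round(3))  # amplitude ~0.707 on |0>|"0"> and |1>|"1"> only
```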

However, that's very different from our experience with actual measurements -- where no matter what state the qubit is in, we always get a definite outcome, with the measurement apparatus exclusively ending up in the state of either |"1"⟩ or |"0"⟩. Plainly, the linear quantum dynamics does not account for this. What to do?

The first, and perhaps most obvious, attempt at fixing this situation was to introduce, besides the usual linear dynamics, another, second dynamical process, the so-called *collapse dynamics*. According to this idea, the description above, upon measurement, probabilistically 'collapses' to one of its components, with a probability given by the usual squared-modulus rule. That is, somehow nature picks out one of the components of the superposition as the 'real' one, and discards the other(s).

This process is sufficient to explain the above conundrum -- the measurement apparatus, along with the qubit, always ends up in a probabilistically determined, definite state. But it has its own severe shortcomings.
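As a toy version of this postulated second process, one can bolt a stochastic, non-linear 'collapse' step onto the vector description -- the function below is purely illustrative of the postulate, not of any mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse(psi):
    """Postulated collapse dynamics: pick one basis component with
    the squared-modulus probabilities and discard the rest."""
    probs = np.abs(psi)**2
    probs = probs / probs.sum()          # guard against rounding error
    outcome = rng.choice(len(psi), p=probs)
    post = np.zeros_like(psi)
    post[outcome] = 1.0                  # the system now *is* |outcome>
    # Note: many different psi collapse to the same post-state,
    # so the prior state cannot be reconstructed -- information is lost.
    return outcome, post

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
outcomes = [collapse(psi)[0] for _ in range(10_000)]
print(np.mean(outcomes))  # ~0.5: each component is picked about half the time
```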

First of all, there is no hard-and-fast rule that determines which interactions, exactly, are supposed to count as 'measurements', and which aren't -- so that there is a certain degree of arbitrariness regarding which dynamical rule to apply in what case. This leads to situations of a somewhat paradoxical character: for instance, if the measurement apparatus and the system it measures are both quantum systems as stipulated, I should be able to describe the evolution of the combined system using the ordinary linear quantum evolution, as long as I don't execute any measurement on this system; but this seems flatly inconsistent with the need of applying the collapse dynamics within the system as the measurement apparatus measures the object system.

This leads to, for instance, the famous thought experiment known as *Schrödinger's cat*: a cat is, probably in gross violation of animal rights, put in a box, along with a devious mechanism consisting of a quantity of radioactive material, a detector, and a vial of poison, such that if an atom within the radioactive substance decays within some fixed time span, say an hour, the detector triggers a mechanism that breaks the vial and kills the cat.

As the radioactive substance, according to quantum mechanics, evolves to a superposition of |decayed⟩ + |not decayed⟩, the detector must accordingly evolve to |triggered⟩ + |not triggered⟩, the vial to |broken⟩ + |whole⟩, and finally, the cat ends up in a state of |alive⟩ + |dead⟩. But what is one to make of such a state?

The argument is often given that the cat, the detector, or 'the environment' constitutes an observer executing a measurement, leading to the collapse of the wave function; but this is in fact not sufficient, since I, standing outside the system, careful not to accidentally measure it, should be able to apply the usual linear dynamics to account for the system as a whole. (Of course, in reality, I would almost immediately end up 'accidentally measuring' it, as the macroscopic nature of the system means it will develop correlations with the environment, and in turn, me, extremely rapidly.)

Another problem is that it is hard to make such a formulation consistent with the special theory of relativity, which allows actions to travel only at a maximum speed of *c*, the speed of light in vacuum. The conflict exists because, for instance, when one performs a position measurement on an electron, the wave function instantaneously goes to zero at every point in space except that at which the electron was indeed found.

However, a more troubling problem for me, philosophically, is the indeterminacy the collapse introduces into the description, though I seem to be somewhat alone in my worry there. Basically, the linear dynamics of the usual quantum evolution is perfectly deterministic -- which especially entails that there exists sufficient reason for any state of the system, given by the prior state of the system and whatever it interacts with. But in the collapse, this *principle of sufficient reason* (due to our old friend Leibniz) fails: there is nothing that determines which state a superposition collapses to; the collapse dynamics thus introduces indeterminism into the description. This is deeply troubling to me, because it implies that some things just happen without reason; there is no use in further questions. There is no more 'why', no matter of fact regarding why the wave function collapsed to this particular state rather than another. It just happened that way.

This seems problematic to me for two reasons. First, if there is no answer to *why* something happens a certain way, I can't see a way *how* it could possibly happen -- there can't be any mechanism according to which it happens, as such a mechanism would ultimately determine the outcome, would give an answer to the why by answering the how. There must be some decision, in some sense, for one possibility over another -- as otherwise the issue would remain undecided -- and yet, there can't be any process by which that decision is reached. Second, it just seems to defeat the purpose of the whole scientific endeavor -- we've reached a point beyond which there is no more answer other than 'it just happens that way'; but then, we might as well never have started down that path, and just look at every phenomenon and explain it by 'it just happens that way'. It just happens that way that planets orbit the sun on ellipses; it just happens that way that like charges repel; it just happens that way that quantum particles can produce interference effects. If that is an acceptable answer anywhere, it should be an acceptable answer everywhere.

So to me, the collapse can't be the right answer to the measurement problem -- even if it can be made to work consistently, the price to pay just seems too high.

**The Universal Wave Function**

Another problem with the collapse (or rather, the extension of an already mentioned one), to me, seems to point the way to a better resolution. If quantum mechanics is a theory that truly applies to all physical systems, then it should also apply to the universe as a whole. But if only measurement causes the collapse of the wave function, then, since nothing measures the universe -- the universe being all there is --, the wave function of the universe can never collapse, but is described exclusively by the linear 'dynamics' (though the question of what, exactly, 'dynamics' might mean when considering the universe as a whole is not easily answered). But then we have the same situation as we had with me, as an outside observer, describing the 'cat in a box'-system using the linear dynamics, while an 'inside' observer, such as the cat or the detector, might want to use the collapse dynamics in order to describe his apparently determinate experience.

One should note that there is an in principle measurable difference between a system in which the collapse has already occurred, and a system still in superposition, so that I could in principle undertake a measurement that tells me whether the cat-in-the-box system is in a superposition (and thus, contains a cat for which there is no definite matter of fact of whether it is alive) or not -- so it's not the case that both descriptions give the same answer; in fact, they are inconsistent.

But if it is then right that the universal wave function never collapses, we are led to consider a point of view in which no collapse ever occurs. This is the position of Hugh Everett III, and at first, it must seem like utter nonsense, as it appears to manifestly fail the requirement of providing an explanation for the appearance of a determinate world out of the quantum description -- because whatever is in superposition, on this account must stay in superposition, and we should thus generally fail to have any determinate experiences at all.

Nevertheless, in his 1957 doctoral dissertation, Everett proposes to do exactly that: derive the appearance of a collapse from the linear dynamics of quantum mechanics. The problem is: it is never made exactly clear how this is supposed to work. Some claim that the different components of the universal wave function should be understood to be distinct universes or worlds, which split apart from each other every time a supposed 'collapse' happens; others think the split occurs only on the level of the 'minds' of an observer; and yet others just flatly deny the existence of any objective facts, arguing that any fact can only be relative -- I measure spin up relative to the particle being spin up; the cat is dead relative to the atom being decayed. I will not enter deeply into the field of Everett exegetics here; rather, I will just mention some general problems faced by all Everettian interpretations, and then focus on a specific approach that I find most interesting.

In looking for an apparent collapse, a couple of features stand out: the first is the appearance of a definite experience, which brings with it the continuity of said experience (a wave function, once collapsed, will yield the same results on repeated measurements), and the intersubjective agreement on this experience (if I measure the particle to be spin up, so will you); the second, somewhat more subtle, is the production of entropy. This is because the collapse is a non-information preserving process: once the wave function has collapsed to a certain state, that state does not contain enough information to reconstruct the previous state -- many different states can collapse to one and the same final state. By contrast, the linear dynamics is completely information preserving (what physicists call 'unitary'), and thus, in particular, deterministic and reversible.

**Ways To Slice The Quantum Cake**

The intention of Everett, arguably, was to show that while objectively, no wave function collapse ever occurs, subjectively, things may well appear as if it did. In particular, if we model an observer as a quantum system who looks at the measurement apparatus (is in the state '|looking⟩'), from our above considerations, the evolution would be the following, if a superposed system is measured:

|looking⟩|ready⟩(|1⟩ + |0⟩) → |1!⟩|"1"⟩|1⟩ + |0!⟩|"0"⟩|0⟩

The observer will thus evolve to a state containing both components |1!⟩ and |0!⟩, where for instance |1!⟩ means 'sees the outcome of the measurement to be 1'. In either component, he would determinately believe to have seen the corresponding state, and subjectively, it would appear to him as if a collapse had occurred (since it appears to him that way if he ends up in the state |1!⟩, as well as if he ends up in the state |0!⟩, by the linearity of the dynamics it must appear to him that way in any superposition of these states).

So let us at least provisionally grant Everett that he indeed accomplishes this. An urgent question remains: why does the observer see the world he does? The above decomposition of the quantum state is not unique; one can write it in a different basis, which may entail a very different picture of the world.

In general, an arbitrary quantum state |ψ⟩ can be written as a linear combination of basis states: |ψ⟩ = Σᵢ cᵢ|ψᵢ⟩, where the cᵢ are complex coefficients. Thus a qubit state can, as we have already done above, be written as |ψ⟩ = 1/√2|1⟩ + 1/√2|0⟩, where I have explicitly reinstated the coefficients, 1/√2 in both cases. However, I can introduce a different basis, |+⟩ = 1/√2|1⟩ + 1/√2|0⟩ and |−⟩ = 1/√2|1⟩ − 1/√2|0⟩, from which I can just as well construct every possible qubit state. And the superposed qubit state from before, written in the new basis, can now simply be expressed as |ψ⟩ = |+⟩ -- manifestly not a superposed state!

So it seems that, again, we can tell two equally valid, but apparently contradictory stories. In one, the qubit, and hence the measurement apparatus and the observer, is superposed -- in the many-worlds picture, there has been a split into two 'copies' of each, differing with respect to 'seeing 1' or 'seeing 0' as measurement result -- while in the other, there is no superposition, and there's a unique observer in a definite state, so no split has taken place.
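Numerically, the basis change is nothing mysterious: it is just expressing one and the same vector in different coordinates. A quick sketch (basis names as in the text):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The alternative basis: |+> = (|1> + |0>)/sqrt(2), |-> = (|1> - |0>)/sqrt(2).
plus  = (ket1 + ket0) / np.sqrt(2)
minus = (ket1 - ket0) / np.sqrt(2)

# The superposed qubit state from the text.
psi = (ket1 + ket0) / np.sqrt(2)

# Coefficients of the SAME state in the two bases (c_i = <basis_i|psi>).
c0, c1 = np.vdot(ket0, psi), np.vdot(ket1, psi)
cp, cm = np.vdot(plus, psi), np.vdot(minus, psi)

print(round(abs(c0), 3), round(abs(c1), 3))  # 0.707 0.707: superposed in {|0>,|1>}
print(round(abs(cp), 3), round(abs(cm), 3))  # 1.0 0.0: simply |+> in {|+>,|->}
```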

Of course, in a measurement, it is ultimately the measuring apparatus that defines the basis (for instance, through its orientation in space), and since we're (as observers) the ultimate buck-stops-here measuring devices, that means us; so if we don't ask why we are the way we are, we can postulate a basis defined through us that solves the problem (known in the literature as the 'preferred basis' problem). In this sense, it would be our point of view that determines the basis, and thus, the way we see the world.

But a better answer is possible. In order to understand this, we must first realize that ultimately, every realistic quantum system is open -- i.e. there is always interaction with an environment not taken to be part of the experimental setup. This environmental interaction introduces decoherence, the (apparent) loss of quantumness: decoherent states can no longer interfere, and thus, behave like systems governed by classical probability theory. Effectively, the interaction with a large system, such as the macroscopic environment (which may include measuring devices, cats, humans...) greatly increases the total number of states available to the total system; but the capacity of two states to interfere is described by their overlap in the state space, and with the increased number of states, that overlap will tend to very small values very quickly.
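The scaling claim at the end of this paragraph is easy to illustrate: the typical overlap of two random unit vectors in a d-dimensional state space falls off like 1/√d, so for an environment of n qubits (d = 2ⁿ), it becomes negligible almost immediately. A rough numerical check, with random states as a crude stand-in for actual environmental dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim):
    """A Haar-ish random unit vector: normalized complex Gaussian."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Overlap |<a|b>| of two random 'environment' states, for growing
# environments of n qubits (state-space dimension 2**n).
for n in (2, 8, 16):
    dim = 2**n
    ov = abs(np.vdot(random_state(dim), random_state(dim)))
    print(n, dim, round(ov, 4))  # the overlap shrinks roughly like 1/sqrt(dim)
```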

However, decoherence doesn't treat all states equally. Some states very quickly evolve into mixtures of one another -- a sentient being in such a world would not have time to perceive any given state of the world; there would be no basis for perception, or cognition, in such a reality. But certain kinds of states, so-called 'pointer states' (because they correspond to states of measurement devices whose pointers indicate a certain outcome), are more robust to such environmental interactions. These states can then be used to construct a preferred basis, in which a classical reality emerges -- objects are well-localized, interference effects (almost) vanish, etc. This process has been given the nickname 'einselection' (from **e**nvironment-**in**duced super**selection**) by Wojciech Zurek.

Here, the quantum cake slices itself, so to speak -- a point of view, and with it, a way to view the world, emerge jointly and dynamically. In some sense, the observer and the observed determine one another.

Decoherence is then a mechanism that may lead to the appearance of wave function collapse in Everettian interpretations, by essentially removing different branches from one another through precluding their mutual interference. And indeed it can generate the entropy production we have surmised is necessary to give the appearance of a collapse: the information is dissipated into the environment; the loss of coherence is an irreversible process, though only effectively so -- a being with perfect knowledge of and absolute control over all degrees of freedom of both the system and the environment could reconstruct the original state from the final one. However, decoherence only accounts for the emergence of definite experiences within a framework such as the many-worlds interpretation -- while it causes the quantumness of the system to 'leak' into the environment, the global wave function is still in superposition. Thus, contrary to what is sometimes claimed, it does not on its own solve the measurement problem.
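Both points -- the leaking of coherence into the environment, and the accompanying entropy production -- can be seen in the smallest possible toy model: a qubit whose state gets copied into a single 'environment' qubit, after which the environment is traced out. (An environment this small makes the irreversibility only formal, of course, but the structure is the same as for a macroscopic one.)

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def density(psi):
    """Density matrix |psi><psi| of a pure state."""
    return np.outer(psi, psi.conj())

def trace_out_env(rho, dim_sys, dim_env):
    """Reduced state of the system: partial trace over the environment."""
    r = rho.reshape(dim_sys, dim_env, dim_sys, dim_env)
    return np.einsum('iaja->ij', r)

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Before the interaction: system alone in |0> + |1>; fully coherent,
# off-diagonal terms of the density matrix are 0.5.
psi_sys = (ket0 + ket1) / np.sqrt(2)
print(density(psi_sys).round(2))

# After the environment has 'measured' the system:
# (|0>|e0> + |1>|e1>)/sqrt(2), with orthogonal environment records e0, e1.
e0, e1 = ket0, ket1
psi_tot = (np.kron(ket0, e0) + np.kron(ket1, e1)) / np.sqrt(2)
rho_sys = trace_out_env(density(psi_tot), 2, 2)
print(rho_sys.round(2))               # off-diagonals gone: a classical mixture
print(round(entropy(rho_sys), 6))     # 1.0: one bit of entropy produced
```

Note that the global state psi_tot is still pure and still superposed -- only the system, viewed on its own, looks like a classical coin toss.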

**Everything's (Im)probable**

Another problem that's facing any Everett-like interpretation is the so-called problem of probability. In a nutshell, probability, as usually understood, is a measure of how much we should expect some event to occur, to the exclusion of other, incompatible events -- this understanding of probability then only makes sense if one thing happens, rather than another. But without a collapse, in a measurement, all possible alternatives do, in fact, occur, as terms in the global superposition. What can we then mean by the probability of getting a certain outcome?

Or, for another take on the problem, Everettian quantum mechanics is a deterministic theory; thus, the only way for probability to arise is through ignorance. However, one can in principle know the complete state of a quantum system -- by simply preparing it in the appropriate state -- and nevertheless be only able to predict the outcome of certain experiments in a probabilistic way. But if we've got total knowledge, and the theory is deterministic, we should be able to give exact predictions!

One response to this problem has been given by Lev Vaidman. He considers a setup in which the experimenter is given a sleeping pill before the experiment; depending on the outcome, he will then, in his sleep, be moved to one of two different rooms. Upon waking up, but before he opens his eyes, he will be asked: 'In which room are you?'

Clearly, he can't definitively answer this question -- all he can do is to calculate the probability of being in either room. Thus, the observer is in fact ignorant about the state of the world, and the interpretation of probability as arising due to ignorance is restored.

However, to precisely quantify the probability, one still has to postulate the usual Born rule, perhaps bolstered by an interpretation of the probabilities as giving the 'weight' or 'measure of existence' of distinct branches, or worlds. Other approaches, most notably the one by Deutsch, expanded upon by Wallace, attempt to derive even the specific form of the Born rule from the linear dynamics -- in particular, they adopt a decision-theoretic approach, showing that expecting future events according to the probabilities given by the Born rule is the most rational strategy.

This has a certain subjective character, and thus may worry some who think that physics should be concerned with objective truths about the world; but I think, to the contrary, that it's a step in the right direction -- one that, however, does not go quite far enough. The reason for this is that while a physical theory may well pertain to objective reality, what it ultimately must explain is our experience within that reality -- which is necessarily subjective. I have previously pointed to the example of a rainbow to illustrate this: there is no actual 'thing' corresponding to the rainbow in the outside world; it is entirely a product of how we perceive the world, and thus, in particular, is different for different observers. Nevertheless, a theory that doesn't explain rainbows would be incomplete.

**Taking the Inside View**

Thus, I believe the only way to fully understand quantum mechanics is to view it from the inside. A good starting point -- since our aim is still to deduce the apparent collapse of a superposition from the linear dynamics -- would be to investigate what a superposition looks like, if viewed from inside. What is it like to be superposed? How does it feel?

These may not be the questions science usually asks, but I believe they are necessary; ultimately, if science is to explain our experience, it must answer these kinds of questions, since the way things feel, what things are like to us, is precisely what constitutes our experience.

To be able to make progress on this issue, however, we need a model for how we get to know our own state (of mind). How do we know how something feels to us? As I have previously argued, the most straightforward model for such introspection is just *asking questions of yourself*.

So, let us go back to our observer, observing a qubit (in order to avoid an unnecessary proliferation of terms, I will suppress the state of the measurement apparatus, and pretend the observer could somehow observe the qubit 'directly'). Let's first say the qubit is in the definite state |1⟩. The observer looks at the qubit, discovers its state, and then asks himself whether or not he got any definite result.

|Definite?⟩|looking⟩|1⟩ → |Definite?⟩|1!⟩|1⟩ → |Yes!⟩|1!⟩|1⟩

The observer is in the state |1!⟩ of having observed 1, and correctly concludes that he is in a definite state. The same works for the state |0⟩ of the qubit:

|Definite?⟩|looking⟩|0⟩ → |Definite?⟩|0!⟩|0⟩ → |Yes!⟩|0!⟩|0⟩

Now let's look at the case of a superposed qubit. The first step works just as before:

|Definite?⟩|looking⟩(|1⟩ + |0⟩) → |Definite?⟩(|1!⟩|1⟩ + |0!⟩|0⟩)

The observer enters into a superposition of observing 1 and observing 0. But what if he now asks himself: 'Have I observed a definite value of the qubit?', or equivalently: 'Am I in a definite state of observing a value of the qubit?' Because of the linearity of the dynamics, the following happens:

|Definite?⟩(|1!⟩|1⟩ + |0!⟩|0⟩) → |Yes!⟩|1!⟩|1⟩ + |Yes!⟩|0!⟩|0⟩

Even though the observer is 'in fact' in a superposed state, if he asks himself if he has observed a definite outcome, he will conclude that yes, he has -- he is in an eigenstate of experiencing a definite result, so to speak.
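This little derivation can be mimicked numerically. In the sketch below, the introspective question is modeled -- purely illustratively -- as a unitary that flips an 'answer' register from |?⟩ to |Yes!⟩ for both definite records |0!⟩ and |1!⟩; since those two records exhaust the toy observer register, the flip acts on the answer slot alone, and linearity does the rest.

```python
import numpy as np

kron = np.kron
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # flips |?> <-> |Yes!>

def ket(i):
    return np.eye(2, dtype=complex)[i]

# Illustrative encodings: answer |?> = ket(0), |Yes!> = ket(1);
# observer |0!> = ket(0), |1!> = ket(1); qubit |0>, |1>.
# 'Did I see a definite outcome?' maps |?>|r!> -> |Yes!>|r!> for BOTH
# records r = 0, 1, i.e. it acts as X on the answer register alone.
ask = kron(X, kron(I2, I2))

# State after observing a superposed qubit: |?>(|1!>|1> + |0!>|0>)/sqrt(2).
pre = kron(ket(0), (kron(ket(1), ket(1)) + kron(ket(0), ket(0))) / np.sqrt(2))
post = ask @ pre

# Probability that the answer register reads |Yes!>.
p_yes = float(np.sum(np.abs(post.reshape(2, -1)[1])**2))
print(round(p_yes, 10))  # 1.0: 'Yes' with certainty,
                         # though observer and qubit remain superposed
```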

Now, the usual interpretation of this is that he 'mistakenly' believes himself to be in a definite state, since actually, he isn't. But it seems to me that this is a lot like mistakenly believing that one has a migraine -- it is just indistinguishable from the real thing, since in our minds, the subjective beliefs are the only real things we have (in particular, the apparent migraine would hurt just as much). So I would prefer to interpret this as leading to the emergence of the appearance of a definite experience (and thus, to a definite experience): even though 'underneath' the level of our access, everything is a chaotic muddle of superpositions, at a higher level, a few islands of definiteness stand out -- such as the invariable belief of experiencing a definite outcome. That ultimately, things are not really that way does not play a greater role than that ultimately, there is no rainbow out there.

Thus, being in a superposition feels exactly like being in a definite state; subjectively, i.e. with regards to our experience, there is nothing to tell them apart. If then the impression of being in a definite state is one of the hallmarks of an apparent collapse, the usual linear dynamics, viewed from the inside, produces exactly this.

However, this might seem just a parlor trick at first blush. Surely, the mere impression of being in a definite state can't lead to the richness of experience -- of *determinate* experience -- we receive through our interaction with the world?

And yet, more or less this is actually what I wish to argue for. First, let us tease out some more consequences of this crazy idea (which David Albert, who introduced it in his book *Quantum Mechanics and Experience* under the name 'the bare theory', called 'amazingly cool').

One particular consequence of the occurrence of a collapse is that if I repeat the same measurement, I will with certainty get the same result. If we believe an actual collapse has occurred, this is easily explained: the system now actually is in the state it collapsed to, and thus, a repeated measurement simply re-detects that state. When a collapse dynamics is absent, though, this agreement requires explanation. In many-worlds theories, this explanation is provided by the stipulation that the observer is now in a certain world, or branch, associated with a specific measurement outcome, and thus, a specific state of the system; so again, a repeated measurement only confirms this fact.

However, in the bare theory, no collapse happens, and no worlds are split -- all we have is the linear dynamics, and consequently, physical systems will rarely be in an eigenstate of having a particular property. So at first, it seems that if the first measurement did not reveal a definite property of the system, the second measurement has no hope of repeating the result. But it is in fact easy to see that, if the observer asks himself whether or not he got the same measurement result as before, the answer will unambiguously be 'yes': after two measurements on the same system, which started out in a superposition, the general state will be |1!⟩|1!⟩|1⟩ + |0!⟩|0!⟩|0⟩; so in fact, any observation corresponding to the question 'Are both measurement outcomes identical?' will return the answer that they indeed are -- however, there will in general not be a fact of the matter regarding *what those outcomes were*.

This argument can easily be extended to cover more complicated cases -- say, if the observer first measures a system in the (definite) state |1⟩, then another system in the state |0⟩, and finally, a system in the superposition |0⟩ + |1⟩, and is then asked (or asks himself) whether his measurement result agrees with either measurement undertaken before, he will claim that this is indeed the case -- i.e. he will report (and believe) that the measurement result he got in the third case will be equal to either one of |1⟩ or |0⟩. Thus, there is no subjective distinction between measurements carried out on systems in definite states versus measurements carried out on systems in superposition -- their results will seem just as 'definite' in both cases, however, in the latter case, there won't be any actual matter of fact regarding the measurement outcome. But the subjective appraisal of the outcomes of the three measurements -- either two times 1, once 0, or the other way around -- will thus agree with what is expected in quantum mechanics with the collapse dynamics.
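The repeated-measurement case can likewise be checked numerically (again a sketch under my own encoding: each record |b!⟩ is modelled as a plain qubit basis vector):

```python
import numpy as np

b0, b1 = np.eye(2)

# State after twice measuring the same superposed qubit (as in the text):
# (|1!>|1!>|1> + |0!>|0!>|0>)/sqrt(2); registers: record1 (x) record2 (x) qubit.
psi = (np.kron(b1, np.kron(b1, b1)) + np.kron(b0, np.kron(b0, b0))) / np.sqrt(2)

# 'Are both measurement records identical?' -- projector onto agreeing records.
P_agree = sum(np.kron(np.outer(b, b), np.kron(np.outer(b, b), np.eye(2)))
              for b in (b0, b1))
print(np.allclose(P_agree @ psi, psi))       # True: definitely 'yes'

# 'Was the first outcome 1?' -- psi is no eigenstate: no fact of the matter.
P_first_1 = np.kron(np.outer(b1, b1), np.eye(4))
print(np.allclose(P_first_1 @ psi, psi))     # False
```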

More generally, one can show that for infinitely many measurements, any observer will tend towards being in an eigenstate of believing to have made measurements with statistics equal to those given by ordinary quantum mechanics (see for example Jeffrey Barrett's *The Quantum Mechanics of Minds and Worlds* -- one of the best expositions of the problems and virtues of different Everettian interpretations, if not the best -- chapter 4). Of course, what exactly it means to believe something in the limit of infinite measurements is somewhat difficult to interpret -- not to presume too much about the reader's capabilities, but I am only capable of accomplishing distinctly finite tasks, and would therefore typically fail to have any definite belief about the statistics of my measurements at all.

Another critical question is whether, given two observers, they will agree on the measurement results they have obtained. That this must be so is in fact shown by the same argument as before, where now the two measurements are not to be interpreted as repetitions by a single observer, but rather as distinct observations, undertaken by different experimenters. And once again, it is the case that, while there is no definite matter of fact regarding what the measurement outcomes were, nevertheless both observers will agree that their measurement records coincide.

The bare theory thus explains three things with regard to our experience: its *definiteness*, its *continuity*, and its *intersubjective coherence*. In other words, why, even though the world is typically in a severely superposed state, we nevertheless appear to have a definite experience; why this experience seems to be (more or less) the same from one moment to the next; and why my experience appears to agree with yours.

One thing that it crucially does not seem to explain is the fact that we don't merely have *some* definite experience, but that this experience has a well-defined *content*: I don't just experience undefined somethings, but concrete objects, out there in the world; I don't merely see *spin up or down* in any given experience, but either definitely *spin up* or definitely *spin down*. As it says in the song, 'I see trees of green/ red roses, too', not 'I see *[some definite things]*/*[other definite things]*, too'!

Failure to explain this basic fact of our most immediate experience seems quite outrageous. But nevertheless, consider how things would appear if it only appeared to us as if our experience had a definite content: I would raise these very same complaints, as certainly, I would be convinced of the definite content of my experience of the world! In other words, if, in an experiment, I got the result *spin up or down*, I would be utterly adamant that I did not, in fact, get this indefinite result, but a completely definite one -- while simultaneously failing to point definitely to either. Yet of this failure, I would again be wholly ignorant!

This is certainly a very strange view, but, to me at least, not without its charm. And later on, for those entirely too uncomfortable with this sort of picture, I will give an argument to somewhat ameliorate the consequences of this idea. But for now, let's talk about some of the problems the bare theory, despite its astounding successes, faces.

**The Theory, Stripped**

There are two common factors in almost all accounts of the bare theory that I am familiar with: 1) excitement about its 'amazingly cool' properties, and 2) its utter rejection as a resolution of the problems of quantum theory. There are several objections that are usually raised against the theory; I will only consider those two that I believe are most severe, for a good discussion of the rest, see again the already mentioned book by Barrett.

The first one is the accusation of *empirical incoherence* -- and I want to come right out here and admit that I'm not exactly sure I understand it. Basically, the argument is that our reasons for postulating, and accepting, quantum theory are the results of certain measurements we have made. But in the bare theory, those measurements typically do not have any definite result at all; thus, they can't suffice for us to accept quantum theory, much less the bare theory reading of it.

This, to me, seems like an utter non-problem: while it is true that our measurements have no definite outcome, that fact itself, and how it comes to be that we nevertheless have a definite belief in their outcomes, may be taken as an empirical datum; and this datum is completely explained by the bare theory. The data that leads us to postulate quantum mechanics and the bare theory then is not the data created by measurements, but the data gained from our definite beliefs about these measurements -- to wit, that they have definite outcomes.

The second objection strikes me as more serious: if the bare theory is true, then typically, one would not expect the world to be in a state in which any given observer is conscious and ready to undertake some certain measurement. Rather, the typical state of the world would consist of an enormous superposition of many possible states for any observer, where he may either be asleep, or distracted when he intended to read the measurement result, or be home sick, or maybe even not exist at all.

Similarly, any given experiment does not have a neat, clean outcome as we have previously supposed, but typically, between the experiment yielding (or not) any certain outcome, there is also the possibility that the experiment may fail to work correctly, or blow up, or that a meteor strikes the lab, obliterating both the experiment and the poor experimentalist conducting it -- such that, after any given interaction, the observer is not merely in a superposition of having gotten one result or another, but also of not having gotten any result at all, or even of having died in the process. In which case, he can hardly 'ask himself' afterwards whether or not he has gotten a definite result -- the question does not make any sense if asked of a pile of meat scraps.

So, concretely, if the observer, after the experiment, is in the state |0!⟩ + |1!⟩ + |Blown to bits!⟩, and we then act on that with |Definite?⟩, we get the following evolution:

|Definite?⟩(|0!⟩ + |1!⟩ + |Blown to bits!⟩) → |Yes!⟩|0!⟩ + |Yes!⟩|1!⟩ + |Huh?!⟩|Blown to bits!⟩,

and consequently, the observer would fail to be in an eigenstate of 'believing to have made a definite observation', and, by the eigenvalue-eigenstate link, thus not have this belief. So, the bare theory apparently does not account for definite beliefs after all!

At first sight, this objection seems quite damning -- all of the bare theory's 'amazingly cool' properties go out the window once one starts considering even slightly more realistic cases.

However, I believe that this argument is a relapse into a laboriously exorcised notion: that of the special nature of the observer in quantum mechanics, where 'observer' here means 'human', or even 'conscious human'. In the above, we have assumed that it makes no sense to ask of the environment whether it is in a definite state, or has gotten a definite result. But ultimately, the environment -- broadly defined as anything that has a chance to enter into the above superposition -- is simply an observer, too. Or, the other way around, a human, conscious observer's belief is ultimately just a physical thing as well: a certain configuration of a physical system, i.e. the brain. That some of these configurations correspond to states in which a certain person has a certain belief does not change anything about that.

So, after any given interaction, there exists a certain system -- a 'meta-observer' -- of which one can 'ask' the question whether it is in a definite state; and this whole system will then 'answer' in the affirmative. Only a subset of this system can meaningfully be considered as identifiable with the original experimenter; but to this subset, the 'yes' will mean that it is in a definite belief-state of having performed a measurement, and gotten a certain result. Only a part of the superposition of the 'meta-observer' can be regarded as having beliefs; but to that part, those beliefs appear definite.

In a certain sense, this seems to invite some of the many worlds back in through the backdoor -- and one could view it like that. For instance, both the observer experiencing his apparently definite measurement result and his being blown to bits would seem to have to be regarded as equally real, and certainly, one has difficulty imagining both being real in the same world. However, the number of such worlds is greatly reduced: rather than there being multiple 'copies' of the observer, experiencing each possible outcome in separate worlds, there is only one observer, in one world. Also, the number of 'worlds' is no absolute quantity, but depends on the resolution with which you view the system: just as there is only one observer, there is only one laboratory containing the observer, but on this level, several of the 'worlds' on the observer level -- those in which, for instance, the observer had a heart attack, but the laboratory as a whole was not damaged -- are now unified. The worlds depend on what is taken to be constant across them -- the existence of the observer, or the existence of the laboratory. But ultimately, at the level of the universe, there is only one single world; so it is questionable how justified talking about 'different worlds' is in this case.

There is one more interesting aspect to this idea of 'constancy across branches'. This involves the stability of meaningful information -- where 'meaningful' here just means being of a certain form for a certain reason. A key is of a certain form, because only this form opens the lock it is made to open. The key does something with the lock, and thus, it has meaning for the lock, and that meaning lies in the information about its form (a description of this form, sufficiently precise, would enable its receiver to construct an equivalent key).

Such meaningful information tends to be the same across many branches, whereas random sequences typically vary strongly. In his book *The Fabric of Reality*, David Deutsch makes this point using DNA: a gene's coding sequence, being adapted to its function, must be (nearly) the same across neighboring branches, while a non-coding 'junk' sequence of the same length is free to vary randomly between them.

Thus, if any species in our 'branch' carries a coding DNA sequence that happens to be identical to a non-coding one, then in other branches, the coding sequence will typically be the same -- being necessary for the species' presence in the first place -- while the non-coding one may vary wildly.

Of the features of an apparent collapse that the bare theory provides, so far we have not accounted for the entropy a measurement with definite outcome generates. Since, on the bare theory, measurements do not in fact have definite outcomes, it might seem that it can't possibly reproduce this aspect. But one can amend the theory easily to account for this.

In order to do that, recall the argument I have introduced in the previous post: a system, consisting of two entangled sub-systems, may be in a pure state, and consequently, have zero entropy. However, each of the sub-systems regarded on its own has a nonzero entropy, because in regarding only this system, one effectively discards the information contained in the correlations between the systems; and of course, hidden information always means entropy.

In fact, the amount of entropy is a measure for the amount of entanglement between the two sub-systems: the more entangled both are, the higher the entropy of each.
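This relation between entanglement and subsystem entropy is easy to exhibit numerically (a sketch of my own: the family of states cos(t)|00⟩ + sin(t)|11⟩ interpolates from a product state to a maximally entangled one):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return max(0.0, float(-np.sum(evals * np.log2(evals))))

def reduced(psi):
    """Partial trace of a two-qubit pure state over the second qubit."""
    m = psi.reshape(2, 2)
    return m @ m.conj().T

# cos(t)|00> + sin(t)|11>: from product state (t=0) to Bell state (t=pi/4)
for t in (0.0, np.pi / 8, np.pi / 4):
    psi = np.zeros(4)
    psi[0], psi[3] = np.cos(t), np.sin(t)
    rho_total = np.outer(psi, psi)
    # The total state stays pure (zero entropy); the subsystem entropy
    # grows with the degree of entanglement, up to 1 bit for the Bell state.
    print(entropy(rho_total), entropy(reduced(psi)))
```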

But now consider that measurement, i.e. the acquiring of information about a system (the object) by another system (the observer), is a physical process -- both systems must interact in order for it to take place. And in this interaction, entanglement is created -- indeed, entanglement can be viewed as the information about the total system not contained in either sub-system.

A minor digression. There is a certain controversy regarding whether the wave function, i.e. the mathematical object used to represent the state of a physical system, describes a real, physical object, or merely the knowledge of an observer (that he uses to predict certain experimental outcomes, etc.) -- in the jargon, whether it is *ontic* or *epistemic*.

As an analogy, one might think of a footprint in mud: the mud here being the observer's brain, while the foot is the quantum system (after all, feet are also colloquially called 'Quanten' in German...). After an interaction, the mud contains knowledge of the foot in the form of its imprint -- this form is physical, as is the altered state of an observer's brain after an interaction with a quantum system. By making a plaster cast, the form of the foot can be completely recovered. Of course, it is always possible that there might be a hidden reality beyond the footprint: such as the person the foot was (presumably) attached to. But this would only correspond to unobservable parts of reality.

Also, the wave function may be epistemic in the sense a probability distribution on phase space is: it may represent our ignorance regarding a more fundamental, ontic layer. The information about such a distribution is not contained in a single system; consequently, it can only exist in the brain after many interactions with physical systems. And indeed, one single throw of a coin does not tell you whether it is fair or biased. Regarding this line of explanation, however, a recent result by Pusey, Barrett, and Rudolph appears to rule out such a possibility (see this excellent explanation on Matt Leifer's blog).

So we see that, since information is physical, there is no clean break between 'epistemic' and 'ontic' views of the wave function; having or not having information about some physical system means being in a different physical state, and if we believe in the causal closure of the observable universe, then physical state transitions can only be effected by interactions between physical systems.

In order to acquire information about a quantum system, the observer then has to interact with it, and this interaction generates entanglement. One can thus no longer describe the observer and the object system separately, but must consider them both part of a larger, entangled system.

However, if we now take the point of view of the observer, this must mean that the description of the object system is, after measurement, no longer complete -- it acquires entropy, as a part of its information is now stored in the correlations to the observer. This gives us an origin for the entropy production in the apparent collapse process.

While the total system of observer and object may thus be in a zero-entropy state, and evolve without picking up any entropy -- that is, according to the linear dynamics, as required -- subjectively, it will look to the observer as if the system he measures picks up entropy in the course of the measurement, merely as a result of the increasing correlations between himself and the measured system, and of his own ignorance about the total state.

But is this actually true? Could the observer not somehow possess perfect self-knowledge, and thus, perfect knowledge of the complete system?

The startling answer to this question is: no, it is impossible for an observer to acquire perfect knowledge about any system he himself is part of (and thus, in particular about the system composed of himself and the object he measures). This is the essential content of a theorem due to Maria Dalla Chiara, elaborated upon by Thomas Breuer.

Interestingly, this is not a consequence of quantum-mechanical weirdness: the result exists just as well for entirely classical theories (though one could interpret it as implying that there are no entirely classical theories, if by 'classical theory' one means a theory in which it is in principle possible to acquire perfect knowledge about every observable). Essentially, it follows from the assumption of a theory's universal validity: if the theory applies equally well to observer and observed, then one necessarily encounters the problems of self-reference. In fact, it is essentially a Gödelian (diagonalization) argument by which it follows that an observer can't distinguish all states of a system he himself is part of.
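A crude counting analogue (my own toy illustration, not Dalla Chiara's or Breuer's actual argument) already suggests why perfect self-measurement must fail: an observer's memory would have to label every state of a composite that contains that very memory, and the composite always has strictly more states.

```python
from itertools import product

# Hypothetical sizes for illustration: a memory with M internal states,
# embedded in a composite whose remaining part has R >= 2 states.
M, R = 4, 3
composite_states = list(product(range(M), range(R)))  # (memory, rest) pairs

# Pigeonhole: no injective 'perfect record' map composite -> memory exists,
# since M * R > M whenever R >= 2.
print(len(composite_states), M)     # 12 composite states vs. 4 memory labels
print(len(composite_states) > M)    # True: some states must share a label
```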

This also poses a restriction on the thought experiment known as

Bringing it all together, now, we seem to have much of what we want from a theory in which an apparent collapse is realized entirely within the linear dynamics: the bare theory explains our apparently determinate experience, its continuity, and the agreement between different observers, while the entropy production is taken care of by the impossibility of perfect state self-knowledge.

I also want to re-iterate that I do not think of the bare theory as deceptive, as it is generally portrayed in the literature. The general point of view is that this way of thinking suggests that we are deceived into believing we have definite experience, while in fact, we typically don't. But being in a superposed state does not mean that any given outcome does not occur, and neither does it mean that it does occur -- it merely means that there is no fact of the matter regarding *whether or not* it occurs.

Besides, we do have definite experience: the experience of having a definite experience is definite, regardless of whether the 'underlying' experience is. Again, all there is to experiences is how they seem to us; it simply makes no sense to claim that we are 'deceived about our experiences'. We experience what we experience, and the bare theory provides a mechanism for definite experience to emerge out of the indefinite quantum world.

It is thus a lot like the way I have argued definite laws emerge from indefinite, random fundamental dynamics. If we take a random bit string, it is utterly impossible to predict whether the next digit will be 1 or 0 -- it is completely lawless. Nevertheless, moving up a level, for any given bit string, I can predict the relative ratios of 1s and 0s, with a reliability that increases with the bit string's length. This lawfulness is not imposed; rather, it emerges directly from the more fundamental lack of laws.
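The bit-string analogy can be made concrete in a few lines (a sketch; the seed and string lengths are arbitrary choices of mine):

```python
import random

# Single bits are lawless -- no rule predicts the next one -- but the
# frequency of 1s obeys a reliable 'law' whose precision grows with length.
random.seed(0)
for n in (10, 1_000, 100_000):
    bits = [random.randint(0, 1) for _ in range(n)]
    print(n, sum(bits) / n)   # the ratio approaches 1/2 as n grows
```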

In a similar sense, the definiteness of our experience emerges from the indefinite nature of quantum mechanics. There is no definite answer to the question whether we saw a 1 or a 0 as the result of a spin experiment on a superposed particle; but our experience of a definite result is definite. Indeed, one may take this as implying that without an observer, there is nothing to be observed; the observer and the observed are two sides of the same coin, the result of some specific way to slice the quantum cake.

Yet still, two questions seem to loom large: one is to give a satisfactory account for the probabilities encountered in quantum mechanics; the other is the apparent discrepancy between our experience of a definite phenomenal content, and the bare theory's prediction of an essentially 'contentless' phenomenology -- there is something definite, yet no further fact as to what, exactly, that is; but subjectively, these facts exist (and indeed, seem to be all that we have direct knowledge of).

I am not too sure the latter is actually a question. As I have already said, even if we did not have any definite phenomenal content, we would be asking the same questions: it would appear just as ludicrous to us to suggest that in fact, we don't have any definite experiences, while it is so immediately clear that we actually do.

Yet I can see that this argumentation, while perhaps satisfying on a certain level, leaves something visceral to be desired. Luckily, an idea due to Sven Aerts may provide an answer to both open problems.

Aerts essentially takes to heart the lessons of Dalla Chiara's and Breuer's results, and thus considers the outcome of a measurement to be a function not merely of the measured system, but of the state of the composite system formed by observer and observed. With this in mind, he considers a procedure for arriving at a measurement outcome that minimizes the influence of the observer: the selected outcome is the one that most probably pertains to the system under study.

Consider the process of observation as an observer assigning to the system he observes a certain experimental outcome, based on both his own state and the state of the system. This assigns the observer a more active, participatory role than on the usual accounts of observation: he *chooses* an outcome, rather than merely registering one.

If we now assume that the observer chooses the measurement outcome in such a way as to be maximally certain that the outcome pertains to the system -- Aerts calls such an observer 'Bayes-optimal' -- then one can show that the usual quantum outcomes and statistics are recovered. To such an observer, the world looks much like it would look to an observer in a collapse theory: definite outcomes with probabilities following the Born rule. This framework also provides a natural explanation for the origin of the probabilities: they quantify the observer's ignorance -- but not about the system he observes, but the irreducible ignorance about his own state.

I'm still not entirely convinced that this last addendum is strictly necessary to derive the appearance of our experience from the quantum formalism; however, those (understandably) uncomfortable with the notion of a definite-but-contentless experience may take recourse to this framework in order to justify their apparently definite experiences.

This is then the closest I can come to providing an answer to the question of how our definite, macroscopic world emerges from the quantum dynamics. The bare theory, and similarly, the Gödelian impossibility of perfect state self-knowledge, ensure that only a part of the quantum world is accessible to any observer; in this way, the appearance of a definite, repeatable, and communicable experience emerges. The linear quantum dynamics alone suffices to account for our experience: we neither have to invent selection rules that break the quantum evolution in order to comfort us with an objective reality, nor do we have to postulate the existence of a plethora of worlds, populated with slightly different copies of each and every one of us. The observer arises, along with the appearance of the observed, of its own accord from the quantum realm, just as regular, lawful behavior emerges from fundamental randomness.

I find this view to be immensely satisfying.

However, this might seem as just a parlor trick at first brush. Surely, the mere impression of being in a definite state can't lead to the richness of experience, of

*determinate*experience, we receive through our interaction with the world?And yet, more or less this is actually what I wish to argue for. First, let us tease out some more consequences of this crazy idea (which David Albert, who introduced it in his book

*Quantum Mechanics and Experience*under the name 'the bare theory', called 'amazingly cool').One particular consequence of the occurrence of a collapse is that if I repeat the same measurement, I will with certainty get the same result. If we believe an actual collapse has occurred, this is easily explained: the system now actually is in the state it collapsed to, and thus, a repeated measurement simply re-detects that state. When a collapse dynamics is absent, though, this agreement requires explanation. In many-worlds theories, this explanation is provided by the stipulation that the observer is now in a certain world, or branch, associated with a specific measurement outcome, and thus, a specific state of the system; so again, a repeated measurement only confirms this fact.

However, in the bare theory, no collapse happens, and no worlds are split -- all we have is the linear dynamics, and consequently, physical systems will rarely be in an eigenstate of having a particular property. So at first, it seems that if the first measurement did not reveal a definite property of the system, the second measurement has no hope of repeating the result. But it is in fact easy to see, that if the observer asks themselves whether or not they got the same measurement result as before, the answer will unambiguously be 'yes': after two measurements on the same system, which started out in a superposition, the general state will be |1!⟩|1!⟩|1⟩ + |0!⟩|0!⟩|0⟩; so in fact, any observation corresponding to the question 'Are both measurement outcomes identical?' will return the answer that they indeed are -- however, there will in general not be a fact of the matter regarding

*what those outcomes were*.This argument can easily be extended to cover more complicated cases -- say, if the observer first measures a system in the (definite) state |1⟩, then another system in the state |0⟩, and finally, a system in the superposition |0⟩ + |1⟩, and is then asked (or asks himself) whether his measurement result agrees with either measurement undertaken before, he will claim that this is indeed the case -- i.e. he will report (and believe) that the measurement result he got in the third case will be equal to either one of |1⟩ or |0⟩. Thus, there is no subjective distinction between measurements carried out on systems in definite states versus measurements carried out on systems in superposition -- their results will seem just as 'definite' in both cases, however, in the latter case, there won't be any actual matter of fact regarding the measurement outcome. But the subjective appraisal of the outcomes of the three measurements -- either two times 1, once 0, or the other way around -- will thus agree with what is expected in quantum mechanics with the collapse dynamics.

More generally, one can show that for infinitely many measurements, any observer will tend towards being in an eigenstate of believing to have made measurements with statistics equal to those given by ordinary quantum mechanics (see for example John Barrett's

*The Quantum Mechanics of Minds and Worlds*-- one of the best expositions of the problems and virtues of different Everettian interpretations, if not the best --, chapter 4). Of course, what exactly it means to believe something in the limit of infinite measurements is somewhat difficult to interpret -- not to presume too much about the reader's capabilities, but I am only capable of accomplishing distinctly finite tasks, and would therefore typically fail to have any definite belief about the statistics of my measurements at all.Another critical question is, given two observers, whether they will agree on the measurements results they have obtained. However, that this must be so is in fact shown by the same argumentation as before, where now the two measurements are not to be interpreted as repetitions by a single observer, but rather, as distinct observations, undertaken by different experimentators. And once again, it is the case that, while there is no definite matter of fact regarding what the measurement outcomes were, nevertheless both observers will agree that their measurement records coincide.

The bare theory thus explains three things with regards to our experience: its

*definiteness*, its*continuity*, and its*intersubjective coherence*. In other words, why, even though the world is typically in a severely superposed state, we nevertheless appear to have a definite experience; why this experience seems to be (more or less) the same from one moment to the next; and why my experience appears to agree with yours.One thing that it crucially does not seem to explain is the fact that we don't merely have

*some*definite experience, but that this experience has a well-defined*content*: I don't just experience undefined somethings, but concrete objects, out there in the world; I don't merely see*spin up or down*in any given experience, but either definitely*spin up*or*spin down.*As it says in the song, 'I see trees of green/ red roses, too', not 'I see*[some definite things]*/*[other definite things]*, too'!Failure to explain this basic fact of our most immediate experience seems quite outrageous. But nevertheless, consider how things would appear if they only appeared to us as if our experience had a definite content: I would raise these very same complaints, as certainly, I would be convinced of the definite content of my experience of the world! In other words, if, in an experiment, I would get the result

*spin up or down*, I would be utterly adamant that I did not, in fact, get this indefinite result, but a completely definite one -- while simultaneously failing to point definitely to either. Yet, this failure I would again be wholly ignorant about!This is certainly a very strange view, but, to me at least, not without its charm. And later on, for those entirely too uncomfortable with this sort of picture, I will give an argument to somewhat ameliorate the consequences of this idea. But for now, let's talk about some of the problems the bare theory, despite its astounding successes, faces.

**The Theory, Stripped**There are two common factors in almost all accounts of the bare theory that I am familiar with: 1) excitement about its 'amazingly cool' properties, and 2) its utter rejection as a resolution of the problems of quantum theory. There are several objections that are usually raised against the theory; I will only consider those two that I believe are most severe, for a good discussion of the rest, see again the already mentioned book by Barrett.

The first one is the accusation of

*empirical incoherence*, and I want to come right out here and admit that I'm not exactly sure I understand it. Basically, the argument is that the reason for postulating, and accepting, quantum theory are the results of certain measurements we have made. But, in the bare theory, those measurements typically do not have any definite result at all; thus, they can't be sufficient for us to accept quantum theory, much less the bare theory reading of it.This, to me, seems like an utter non-problem: while it is true that our measurements have no definite outcome, that fact itself, and how it comes to be that we nevertheless have a definite belief in their outcomes, may be taken as an empirical datum; and this datum is completely explained by the bare theory. The data that leads us to postulate quantum mechanics and the bare theory then is not the data created by measurements, but the data gained from our definite beliefs about these measurements -- to wit, that they have definite outcomes.

The second objection strikes me as more serious: if the bare theory is true, then typically, one would not expect the world to be in a state in which any given observer is conscious and ready to undertake some certain measurement. Rather, the typical state of the world would consist of an enormous superposition of many possible states for any observer, where he may either be asleep, or distracted when he intended to read the measurement result, or be home sick, or maybe even not exist at all.

Similarly, any given experiment does not have a neat, clean outcome as we have previously supposed, but typically, between the experiment yielding (or not) any certain outcome, there is also the possibility that the experiment may fail to work correctly, or blow up, or that a meteor strikes the lab, obliterating both the experiment and the poor experimentalist conducting it -- such that, after any given interaction, the observer is not merely in a superposition of having gotten one result or another, but also of not having gotten any result at all, or even of having died in the process. In which case, he can hardly 'ask himself' afterwards whether or not he has gotten a definite result -- the question does not make any sense if asked of a pile of meat scraps.

So, concretely: if the observer is, after the experiment, in the state |0!⟩ + |1!⟩ + |Blown to bits!⟩, and we then act on that with |Definite?⟩, we get the following evolution:

|Definite?⟩(|0!⟩ + |1!⟩ + |Blown to bits!⟩) → |Yes!⟩|0!⟩ + |Yes!⟩|1!⟩ + |Huh?!⟩|Blown to bits!⟩,

and consequently, the observer would fail to be in an eigenstate of 'believing to have made a definite observation', and, by the eigenvalue-eigenstate link, thus not have this belief. So, the bare theory apparently does not account for definite beliefs after all!
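This failure can be made concrete in a small numerical sketch. The encoding below is my own toy illustration, not anything from the post's formalism: the 'Definite?' interaction is modeled as a linear map that tags each outcome branch with the belief state it would produce in isolation, and we then check whether the resulting total state lies in the 'believes to have a definite result' eigenspace:

```python
import numpy as np

# Belief register basis: |Yes!> and |Huh?!>
yes = np.array([1.0, 0.0])
huh = np.array([0.0, 1.0])

# Outcome basis: |0!>, |1!>, |Blown to bits!>
outcomes = np.eye(3)

def definite(outcome_state):
    """The 'Definite?' interaction, extended linearly over branches:
    each basis outcome gets tagged with the belief it would produce."""
    total = np.zeros(6)
    for i, amp in enumerate(outcome_state):
        belief = yes if i < 2 else huh  # the 'Blown to bits!' branch yields no 'Yes!'
        total += amp * np.kron(belief, outcomes[i])
    return total

# Equal superposition of the three branches (normalized)
psi = (outcomes[0] + outcomes[1] + outcomes[2]) / np.sqrt(3)
post = definite(psi)

# Projector onto 'belief register reads Yes!' -- if the observer were in
# an eigenstate of definite belief, this overlap would be exactly 1
P_yes = np.kron(np.outer(yes, yes), np.eye(3))
overlap = float(np.linalg.norm(P_yes @ post) ** 2)
print(overlap)  # 2/3 -- not an eigenstate of definite belief
```

The overlap of 2/3 rather than 1 is just the statement in the text: with the 'Blown to bits!' branch present, the observer is not in an eigenstate of 'believing to have made a definite observation'.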

At first sight, this objection seems quite damning -- all the bare theory's 'amazingly cool' properties go out of the window, once one starts considering even slightly more realistic cases.

However, I believe that this argument is a relapse into a laboriously exorcised notion: that of the special nature of the observer in quantum mechanics, where 'observer' here means 'human' or even 'conscious human'. In the above, we have assumed that it makes no sense to ask of the environment whether it is in a definite state, or has gotten a definite result. But ultimately, the environment -- broadly defined as anything that has a chance to enter into the above superposition -- is simply an observer, too. Or, the other way around: a human, conscious observer's belief is ultimately just a physical thing, a certain configuration of a physical system, i.e. the brain -- and the fact that some of these configurations correspond to states in which a certain person has a certain belief does not change anything about that.

So, after any given interaction, there exists a certain system -- a 'meta-observer' -- of which one can 'ask' the question whether it is in a definite state; and this whole system will then 'answer' in the affirmative. Only a subset of this system can meaningfully be considered as identifiable with the original experimenter; but to this subset, the 'yes' will mean that it is in a definite belief-state of having performed a measurement, and gotten a certain result. Only a part of the superposition of the 'meta-observer' can be regarded as having beliefs; but to that part, those beliefs appear definite.

In a certain sense, this seems to invite some of the many worlds back in through the backdoor -- and one could view it like that. For instance, both the observer experiencing his apparently definite measurement result and his being blown to bits would seem to have to be regarded as equally real, and certainly, one has difficulty imagining both being real in the same world. However, the number of such worlds is greatly reduced: rather than there being multiple 'copies' of the observer, experiencing each possible outcome in separate worlds, there is only one observer, in one world. Also, the number of 'worlds' is no absolute quantity, but depends on the resolution with which you view the system: just as there is only one observer, there is only one laboratory containing the observer, but on this level, several of the 'worlds' on the observer level -- those in which, for instance, the observer had a heart attack, but the laboratory as a whole was not damaged -- are now unified. The worlds depend on what is taken to be constant across them -- the existence of the observer, or the existence of the laboratory. But ultimately, at the level of the universe, there is only one single world; so it is questionable how justified talking about 'different worlds' is in this case.

There is one more interesting aspect to this idea of 'constancy across branches'. This involves the stability of meaningful information -- where 'meaningful' here just means being of a certain form for a certain reason. A key is of a certain form, because only this form opens the lock it is made to open. The key does something with the lock, and thus, it has meaning for the lock, and that meaning lies in the information about its form (a description of this form, sufficiently precise, would enable its receiver to construct an equivalent key).

Such meaningful information tends to be the same across many branches, whereas random sequences typically vary strongly. In his book *The Fabric of Reality* (where he lobbies strongly for a many-worlds view of quantum mechanics), David Deutsch uses the example of coding vs. non-coding ('junk') sequences of DNA: a gene that is necessary for the functioning of an organism, say one which determines insulin production, is likely to be the same across many different branches, as changes, i.e. mutations, will typically be disadvantageous to the organism carrying them, leading to them being selected against. But junk sequences of DNA are copied independently of their utility, and thus, any change to them will typically have no effect on an organism's reproductive fitness.

Thus, even if some species in our 'branch' carried a coding DNA sequence that happens to be identical to a non-coding one, across other branches the coding sequence will typically be the same -- being necessary for the species' presence in the first place -- while the non-coding one may vary wildly.

**Know Thyself**

Of the features of an apparent collapse that the bare theory provides, we have so far not accounted for the entropy that a measurement with a definite outcome generates. Since, on the bare theory, measurements do not in fact have definite outcomes, it might seem that it can't possibly reproduce this aspect. But the theory can easily be amended to account for it.

In order to do that, recall the argument I have introduced in the previous post: a system, consisting of two entangled sub-systems, may be in a pure state, and consequently, have zero entropy. However, each of the sub-systems regarded on its own has a nonzero entropy, because in regarding only this system, one effectively discards the information contained in the correlations between the systems; and of course, hidden information always means entropy.

In fact, the amount of entropy is a measure for the amount of entanglement between the two sub-systems: the more entangled both are, the higher the entropy of each.
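This relation is easy to exhibit in a small worked example; a sketch (using numpy, with a maximally entangled Bell state and the von Neumann entropy measured in bits):

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/√2: a pure state of the total system
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)  # total density matrix: pure, entropy zero

# Reduced state of subsystem A: trace out the second qubit
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

def von_neumann_entropy(r):
    """S(rho) = -sum_k p_k log2 p_k over the nonzero eigenvalues."""
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

print(von_neumann_entropy(rho))    # essentially 0: pure total state
print(von_neumann_entropy(rho_A))  # 1 bit: maximal for a single qubit
```

The subsystem's one bit of entropy is exactly the information hidden in the correlations with its partner: the total state is pure, yet each half, viewed alone, is maximally mixed.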

But now consider that measurement, i.e. the acquiring of information about a system (the object) by another system (the observer), is a physical process -- both systems must interact in order for it to take place. And in this interaction, entanglement is created -- indeed, entanglement can be viewed as the information about the total system not contained in either sub-system.

A minor digression. There is a certain controversy regarding whether the wave function, i.e. the mathematical object used to represent the state of a physical system, describes a real, physical object, or merely the knowledge of an observer (which he uses to predict certain experimental outcomes, etc.) -- in the jargon, whether it is *ontic* or *epistemic* in nature. Considerations like the above show, in my opinion, that there is not much of a difference between the two. Certainly, information is physical; the brain of an observer having some knowledge thus is physically different from the brain of an observer lacking that knowledge. But this physical difference must have been acquired through interaction with a physical system -- the quantum system under study (perhaps via appropriate intermediaries). So if this brain contains, in its physical configuration, knowledge of the state of some quantum system, encoded in a wave function, and this wave function is in fact a complete specification of the system, then this information must both have been physically present in the system, and encapsulate the whole of it -- but then, it is in one way or another identical to the system (or at least to the observable part thereof).

As an analogy, one might think of a footprint in mud: the mud here being the observer's brain, while the foot is the quantum system (after all, feet are also colloquially called 'Quanten' in German...). After an interaction, the mud contains knowledge of the foot in the form of its imprint -- this form is physical, as is the altered state of an observer's brain after an interaction with a quantum system. By making a plaster cast, the form of the foot can be completely recovered. Of course, it is always possible that there is a hidden reality beyond the footprint, such as the person the foot was (presumably) attached to; but this would correspond only to unobservable parts of reality.

Also, the wave function may be epistemic in the sense a probability distribution on phase space is: it may represent our ignorance regarding a more fundamental, ontic layer. The information about such a distribution is not contained in a single system; consequently, it can only exist in the brain after many interactions with physical systems. And indeed, one single throw of a coin does not tell you whether it is fair or biased. Regarding this line of explanation, however, a recent result by Pusey, Barrett, and Rudolph appears to rule out such a possibility (see this excellent explanation on Matt Leifer's blog).

So we see that, since information is physical, there is no clean break between 'epistemic' and 'ontic' views of the wave function; having or not having information about some physical system means being in a different physical state, and if we believe in the causal closure of the observable universe, then physical state transitions can only be effected by interactions between physical systems.

In order to acquire information about a quantum system, the observer then has to interact with it, and this interaction generates entanglement. One can thus no longer describe the observer and the object system separately, but must consider them both part of a larger, entangled system.

However, if we now take the point of view of the observer, this must mean that the description of the object system is, after measurement, no longer complete -- it acquires entropy, as a part of its information is now stored in the correlations to the observer. This gives us an origin for the entropy production in the apparent collapse process.

While the total system of observer and object may thus be in a zero-entropy state, evolving without picking up any entropy -- that is, according to the linear dynamics, as required -- it will subjectively look to the observer as if the system he measures picks up entropy in the course of the measurement, merely as a result of the increasing correlations between him and the measured system, and of his own ignorance about the total state.

But is this actually true? Could the observer not somehow possess perfect self-knowledge, and thus, perfect knowledge of the complete system?

The startling answer to this question is: no, it is impossible for an observer to acquire perfect knowledge about any system he himself is part of (and thus, in particular about the system composed of himself and the object he measures). This is the essential content of a theorem due to Maria Dalla Chiara, elaborated upon by Thomas Breuer.

Interestingly, this is not a consequence of quantum-mechanical weirdness: the result exists just as well for entirely classical theories (though one could interpret it as implying that there are no entirely classical theories, if by 'classical theory' one means a theory in which it is in principle possible to acquire perfect knowledge about every observable). Essentially, it follows from the assumption of a theory's universal validity: if the theory applies equally well to observer and observed, then one necessarily encounters the problems of self-reference. In fact, it is essentially a Gödelian (diagonalization) argument by which it follows that an observer can't distinguish all states of a system he himself is part of.

This also poses a restriction on the thought experiment known as *Laplace's demon*: a sufficiently powerful intellect, in possession of complete knowledge and equipped with perfect reasoning skills, could, in a deterministic universe, predict the future exactly from the current state of the world. But as we now see, such complete knowledge is impossible -- is, in fact, logically contradictory: since the demon must be part of the world in order to acquire information about it (information is physical), the above considerations make it impossible for him to perfectly know the state of the world. This introduces an apparent indeterminism into the theory, in the form of the demon's inability to make perfect predictions.

Bringing it all together, we now seem to have much of what we want from a theory in which an apparent collapse is realized entirely within the linear dynamics: the bare theory explains our apparently determinate experience, its continuity, and the agreement between different observers, while the entropy production is taken care of by the impossibility of perfect state self-knowledge.

I also want to reiterate that I do not think of the bare theory as deceptive, as it is generally portrayed in the literature. The usual point of view is that this way of thinking suggests we are deceived into believing we have definite experience, while in fact, we typically don't. But being in a superposed state does not mean that any given outcome does not occur, and neither does it mean that it does occur -- it merely means that there is no fact of the matter regarding *which* outcome occurs. So it may very well simply be true that a definite outcome occurs, while there is just no fact determining which outcome that is (perhaps this kind of 'ω-inconsistency' is just the price one has to pay for insisting that our experience is complete, i.e. always definite...). Indeed, in a world with limited information content, this does not seem such a strange proposal, at least not to me.

Besides, we do have definite experience: the experience of having a definite experience is definite, regardless of whether the 'underlying' experience is. Again, all there is to experiences is how they seem to us; it simply makes no sense to claim that we are 'deceived about our experiences'. We experience what we experience, and the bare theory provides a mechanism for definite experience to emerge out of the indefinite quantum world.

It is thus a lot like the way I have argued definite laws emerge from indefinite, random fundamental dynamics. If we take a random bit string, it is utterly impossible to predict whether the next digit will be 1 or 0 -- it is completely lawless. Nevertheless, moving up a level, for any given bit string, I can predict the relative ratios of 1s and 0s, with a reliability that increases with the bit string's length. This lawfulness is not imposed; rather, it emerges directly from the more fundamental lack of laws.
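This emergence of statistical lawfulness from lawless bits is easy to see numerically; a minimal sketch:

```python
import random

random.seed(1)  # fixed seed, so the run is reproducible

# No individual bit is predictable -- but the ratio of 1s in the
# string converges toward 1/2 as the string grows longer.
for n in (10, 1_000, 100_000):
    bits = [random.getrandbits(1) for _ in range(n)]
    print(n, sum(bits) / n)
```

The fluctuations of the ratio around 1/2 shrink roughly as 1/√n: the longer the string, the more reliable the higher-level prediction, even though the underlying sequence remains completely lawless.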

In a similar sense, the definiteness of our experience emerges from the indefinite nature of quantum mechanics. There is no definiteness to the question whether or not we saw a 1 or a 0 as a result to a spin experiment on a superposed particle; but our experience of a definite result is definite. Indeed, one may take this as implying that without an observer, there is nothing to be observed; the observer and the observed are two sides of the same coin, the result of some specific way to slice the quantum cake.

Yet still, two questions seem to loom large: one is to give a satisfactory account for the probabilities encountered in quantum mechanics; the other is the apparent discrepancy between our experience of a definite phenomenal content, and the bare theory's prediction of an essentially 'contentless' phenomenology -- there is something definite, yet no further fact as to what, exactly, that is; but subjectively, these facts exist (and indeed, seem to be all that we have direct knowledge of).

I am not too sure the latter is actually a question. As I have already said, even if we did not have any definite phenomenal content, we would be asking the same questions: it would appear just as ludicrous to us to suggest that in fact, we don't have any definite experiences, while it is so immediately clear that we actually do.

Yet I can see that this argumentation, while perhaps satisfying on a certain level, leaves something to be desired on a more visceral one. Luckily, an idea due to Sven Aerts may provide an answer to both open problems.

**B.Y.O.P. (Bring Your Own Phenomenology)**

Aerts essentially takes to heart the lessons of Dalla Chiara's and Breuer's results, and thus considers the outcome of a measurement to be a function not merely of the measured system, but rather of the state of the system composed of both the observer and the observed. With this in mind, he considers a procedure for arriving at a measurement outcome that minimizes the influence of the observer, such that the selected outcome is the one most probable to pertain to the system under study.

Consider the process of observation as an observer assigning to the system he observes a certain experimental outcome, based on both his state and the state of the system. This assigns the observer a more active, participatory role than on the usual accounts of observation: he *chooses*, rather than *reveals*, a measurement outcome. But this is only to be expected in regimes where the coupling between the observer and the system is no longer negligible, i.e. where it can no longer be assumed, as it is in classical physics, that the observer passively receives information broadcast by the system.

If we now assume that the observer chooses the measurement outcome in such a way as to be maximally certain that the outcome pertains to the system -- Aerts calls such an observer 'Bayes-optimal' -- then one can show that the usual quantum outcomes and statistics are recovered. To such an observer, the world looks much like it would to an observer in a collapse theory: definite outcomes, with probabilities following the Born rule. This framework also provides a natural explanation for the origin of the probabilities: they quantify the observer's ignorance -- not about the system he observes, but his irreducible ignorance about his own state.
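Whatever the underlying account, the upshot for such an observer is ordinary Born-rule statistics; a toy illustration (the amplitudes here are arbitrary choices of mine, and the sampling simply stands in for whatever mechanism selects the outcomes):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary superposed state (amplitudes chosen for illustration)
psi = np.array([np.sqrt(0.2), np.sqrt(0.8)])

# Over many runs, outcome k is recorded with Born probability |psi_k|^2
probs = np.abs(psi) ** 2
samples = rng.choice(len(psi), size=100_000, p=probs)
freq = np.bincount(samples) / len(samples)
print(freq)  # close to [0.2, 0.8]
```

To the observer keeping such records, the world is indistinguishable from one governed by a collapse theory: definite outcomes, distributed according to the squared amplitudes.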

I'm still not entirely convinced that this last addendum is strictly necessary to derive the appearance of our experience from the quantum formalism; however, those (understandably) uncomfortable with the notion of a definite-but-contentless experience may take recourse to this framework in order to justify their apparently definite experiences.

This is then the closest I can come to providing an answer to the question of how our definite, macroscopic world emerges from the quantum dynamics. The bare theory, and similarly, the Gödelian impossibility of perfect state self-knowledge, ensure that only a part of the quantum world is accessible to any observer; in this way, the appearance of a definite, repeatable, and communicable experience emerges. It is just the linear quantum dynamics that is necessary to account for our experience; we neither have to invent selection rules that break the quantum evolution to comfort us with an objective reality, nor do we have to postulate the existence of a plethora of worlds, populated with slightly different copies of each and every one of us. The observer arises, along with the appearance of the observed, out of itself from the quantum realm, just as regular, lawful behavior emerges from fundamental randomness.

I find this view to be immensely satisfying.
