Schrödinger’s Cat

Erwin Schrödinger describes the context for his thought experiment with a cat:

The other alternative consists of granting reality only to the momentarily sharp determining parts – or in more general terms to each variable a sort of realization just corresponding to the quantum mechanical statistics of this variable at the relevant moment.

That it is in fact not impossible to express the degree and kind of blurring of all variables in one perfectly clear concept follows at once from the fact that Q.M. as a matter of fact has and uses such an instrument, the so-called wave function or psi-function, also called system vector. Much more is to be said about it further on. That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message. At all events it is an imagined entity that images the blurring of all variables at every moment just as clearly and faithfully as does the classical model its sharp numerical values. Its equation of motion too, the law of its time variation, so long as the system is left undisturbed, lags not one iota, in clarity and determinacy, behind the equations of motion of the classical model. So the latter could be straight-forwardly replaced by the psi-function, so long as the blurring is confined to atomic scale, not open to direct control. In fact the function has provided quite intuitive and convenient ideas, for instance the “cloud of negative electricity” around the nucleus, etc. But serious misgivings arise if one notices that the uncertainty affects macroscopically tangible and visible things, for which the term “blurring” seems simply wrong. The state of a radioactive nucleus is presumably blurred in such a degree and fashion that neither the instant of decay nor the direction, in which the emitted alpha-particle leaves the nucleus, is well-established. Inside the nucleus, blurring doesn’t bother us. The emerging particle is described, if one wants to explain intuitively, as a spherical wave that continuously emanates in all directions and that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform glow, but rather lights up at one instant at one spot – or, to honor the truth, it lights up now here, now there, for it is impossible to do the experiment with only a single radioactive atom. If in place of the luminescent screen one uses a spatially extended detector, perhaps a gas that is ionised by the alpha-particles, one finds the ion pairs arranged along rectilinear columns, that project backwards on to the bit of radioactive matter from which the alpha-radiation comes (C.T.R. Wilson’s cloud chamber tracks, made visible by drops of moisture condensed on the ions).

One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.

It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.

We see here the two elements described at the end of this earlier post. The evolution of the psi-function is deterministic, but there seems to be an element of randomness when someone comes to check on the cat.

Hugh Everett amusingly describes a similar experiment performed on human beings (but without killing anyone):

Isolated somewhere out in space is a room containing an observer, A, who is about to perform a measurement upon a system S. After performing his measurement he will record the result in his notebook. We assume that he knows the state function of S (perhaps as a result of previous measurement), and that it is not an eigenstate of the measurement he is about to perform. A, being an orthodox quantum theorist, then believes that the outcome of his measurement is undetermined and that the process is correctly described by Process 1 [namely a random determination caused by measurement].

In the meantime, however, there is another observer, B, outside the room, who is in possession of the state function of the entire room, including S, the measuring apparatus, and A, just prior to the measurement. B is only interested in what will be found in the notebook one week hence, so he computes the state function of the room for one week in the future according to Process 2 [namely the deterministic evolution of the wave function]. One week passes, and we find B still in possession of the state function of the room, which this equally orthodox quantum theorist believes to be a complete description of the room and its contents. If B’s state function calculation tells beforehand exactly what is going to be in the notebook, then A is incorrect in his belief about the indeterminacy of the outcome of his measurement. We therefore assume that B’s state function contains non-zero amplitudes over several of the notebook entries.

At this point, B opens the door to the room and looks at the notebook (performs his observation.) Having observed the notebook entry, he turns to A and informs him in a patronizing manner that since his (B’s) wave function just prior to his entry into the room, which he knows to have been a complete description of the room and its contents, had non-zero amplitude over other than the present result of the measurement, the result must have been decided only when B entered the room, so that A, his notebook entry, and his memory about what occurred one week ago had no independent objective existence until the intervention by B. In short, B implies that A owes his present objective existence to B’s generous nature which compelled him to intervene on his behalf. However, to B’s consternation, A does not react with anything like the respect and gratitude he should exhibit towards B, and at the end of a somewhat heated reply, in which A conveys in a colorful manner his opinion of B and his beliefs, he rudely punctures B’s ego by observing that if B’s view is correct, then he has no reason to feel complacent, since the whole present situation may have no objective existence, but may depend upon the future actions of yet another observer.

Schrödinger’s problem was that the wave function seems to describe something “blurred,” but if we assume that is because something blurred exists, it seems to contradict our experience, which is of something quite distinct: a live cat or a dead cat, but not something in between.

Everett proposes that his interpretation of quantum mechanics is able to resolve this difficulty. After presenting other interpretations, he proposes his own (“Alternative 5”):

Alternative 5: To assume the universal validity of the quantum description, by the complete abandonment of Process 1 [again, this was the apparently random measurement process]. The general validity of pure wave mechanics, without any statistical assertions, is assumed for all physical systems, including observers and measuring apparata. Observation processes are to be described completely by the state function of the composite system which includes the observer and his object-system, and which at all times obeys the wave equation (Process 2).

It is evident that Alternative 5 is a theory of many advantages. It has the virtue of logical simplicity and it is complete in the sense that it is applicable to the entire universe. All processes are considered equally (there are no “measurement processes” which play any preferred role), and the principle of psycho-physical parallelism is fully maintained. Since the universal validity of the state function is asserted, one can regard the state functions themselves as the fundamental entities, and one can even consider the state function of the whole universe. In this sense this theory can be called the theory of the “universal wave function,” since all of physics is presumed to follow from this function alone. There remains, however, the question whether or not such a theory can be put into correspondence with our experience.

This present thesis is devoted to showing that this concept of a universal wave mechanics, together with the necessary correlation machinery for its interpretation, forms a logically self consistent description of a universe in which several observers are at work.

Ultimately, Everett’s response to Schrödinger is that the cat is indeed “blurred,” and that this never goes away. When someone checks on the cat, the person checking is also “blurred,” becoming a composite of someone seeing a dead cat and someone seeing a live cat. However, these are in effect two entirely separate worlds, one in which someone sees a live cat, and one in which someone sees a dead cat.

Everett mentions “the necessary correlation machinery for its interpretation,” because a mathematical theory of physics as such does not necessarily say that anyone should see anything in particular. So, for example, when Newton says that there is a gravitational attraction between masses inversely proportional to the square of their distance, what exactly should we expect to see, given that? Obviously there is no way to answer this without adding something, and ultimately we need to add something non-mathematical, namely something about the way our experiences work.

I will not pretend to judge whether or not Everett does a good job defending his position. There is an interesting point here, whether or not his defense is ultimately a good one. “Orthodox” quantum mechanics, as Everett calls it, only gives statistical predictions about the future, and as long as nothing is added to the theory, it implies that deterministic predictions are impossible. It follows that if the position in our last post, on an open future, was correct, it must be possible to explain the results of quantum mechanics in terms of many worlds or multiple timelines. And I do not merely mean that we can give the same predictions with a one-world account or with a many-world account. I mean that there must be a many-world account such that its contents are metaphysically identical to the contents of a one-world account with an open future.

This would nonetheless leave undetermined the question of what sort of account would be most useful to us in practice.

Miracles and Anomalies: Or, Your Religion is False

In 2011 there was an apparent observation of neutrinos traveling faster than light. Wikipedia says of this, “Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.” In other words, most scientists did not take the result very seriously, even before any specific explanation was found. As I stated here, it is possible to push unreasonably far in this direction, in such a way that one will be reluctant to ever modify one’s current theories. But there is also something reasonable about this attitude.

Alexander Pruss explains why scientists tend to be skeptical of such anomalous results in this post on Bayesianism and anomaly:

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

  1. T is true and A is false
  2. T is false and A is true
  3. both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one given that we are in one of the three is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

“This answer isn’t optimistic,” because in the case of the neutrinos, this analysis would imply that scientists should have instantly become ten times more willing to consider the possibility that the theory of special relativity is false. This is surely not what happened.
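
For readers who want to check the arithmetic, here is a minimal sketch of this intuitive calculation in Python. The prior P(T) = 0.999 is stated in the quote; P(A) = 0.9 is my inference from the quoted figures (it is the value that makes them come out right), and the variable names are mine.

```python
# Sketch of the "quick and intuitive" calculation, assuming P(T) = 0.999 and
# P(A) = 0.9 with T and A independent (values implied by the quoted figures).
p_T = 0.999   # prior credence in the well-established theory T
p_A = 0.9     # prior credence in the conjoined auxiliary hypotheses A

# The anomaly tells us only that "T and A" is false, i.e. that we are in one
# of three regions of probability space.
r1 = p_T * (1 - p_A)        # T true, A false   -> 0.0999
r2 = (1 - p_T) * p_A        # T false, A true   -> 0.0009
r3 = (1 - p_T) * (1 - p_A)  # both false        -> 0.0001

# Credence in T given only that we are in one of the three regions.
p_T_given_anomaly = r1 / (r1 + r2 + r3)
print(round(r1, 4), round(r2, 4), round(r3, 4))  # 0.0999 0.0009 0.0001
print(round(p_T_given_anomaly, 2))               # 0.99: from three nines down to two
```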

Pruss therefore presents an alternative calculation:

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now the setup ensures:

  1. P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

  2. P(E|∼A ∧ T)=0.5
  3. P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

  4. P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

  5. P(E|T) = P(E|A ∧ T)P(A|T) + P(E|∼A ∧ T)P(∼A|T) = 0.05
  6. P(E|∼T) = P(E|A ∧ ∼T)P(A|∼T) + P(E|∼A ∧ ∼T)P(∼A|∼T) = 0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.

To make the point without the mathematics (which in any case is only used to illustrate the point, since Pruss is choosing the specific values himself), if you have a theory which would make the anomaly probable, that theory would be strongly supported by the anomaly. But we already know that theories like that are false, because otherwise the anomaly would not be an anomaly. It would be normal and common. Thus all of the actually plausible theories still make the anomaly an improbable observation, and therefore these theories are only weakly supported by the observation of the anomaly. The result is that the new observation makes at most a minor difference to your previous opinion.
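
Since the point rests on Pruss’s illustrative numbers, here is a minimal sketch reproducing his full calculation, again assuming P(T) = 0.999 and P(A) = 0.9 with T and A independent; the likelihoods are the ones he stipulates above.

```python
# Sketch of Pruss's full Bayesian calculation with his stipulated likelihoods.
p_T, p_A = 0.999, 0.9

p_E_given_A_T   = 0.0   # T and A together entail not-E
p_E_given_nA_T  = 0.5   # if an auxiliary hypothesis fails, E is "up for grabs"
p_E_given_nA_nT = 0.5
p_E_given_A_nT  = 0.1   # even if T is false, its typical predictions likely hold

# Total probability of the anomaly E under T and under not-T
# (using independence, so P(A|T) = P(A|~T) = p_A).
p_E_given_T  = p_E_given_A_T * p_A + p_E_given_nA_T * (1 - p_A)    # 0.05
p_E_given_nT = p_E_given_A_nT * p_A + p_E_given_nA_nT * (1 - p_A)  # 0.14

# Bayes' theorem: posterior credence in T after observing the anomaly.
p_T_given_E = p_E_given_T * p_T / (p_E_given_T * p_T + p_E_given_nT * (1 - p_T))
print(round(p_E_given_T, 2), round(p_E_given_nT, 2))  # 0.05 0.14
print(round(p_T_given_E, 3))                          # 0.997: barely moved from 0.999
```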

We can apply this analysis to the discussion of miracles. David Hume, in his discussion of miracles, seems to desire a conclusive proof against them which is unobtainable, and in this respect he is mistaken. But near the end of his discussion, he brings up the specific topic of religion and says that his argument applies to it in a special way:

Upon the whole, then, it appears, that no testimony for any kind of miracle has ever amounted to a probability, much less to a proof; and that, even supposing it amounted to a proof, it would be opposed by another proof; derived from the very nature of the fact, which it would endeavour to establish. It is experience only, which gives authority to human testimony; and it is the same experience, which assures us of the laws of nature. When, therefore, these two kinds of experience are contrary, we have nothing to do but subtract the one from the other, and embrace an opinion, either on one side or the other, with that assurance which arises from the remainder. But according to the principle here explained, this subtraction, with regard to all popular religions, amounts to an entire annihilation; and therefore we may establish it as a maxim, that no human testimony can have such force as to prove a miracle, and make it a just foundation for any such system of religion.

The idea seems to be something like this: contrary systems of religion put forth miracles in their support, so the supporting evidence for one religion is more or less balanced by the supporting evidence for the other. Likewise, the evidence is weakened even in itself by people’s propensity to lies and delusion in such matters (some of this discussion was quoted in the earlier post on Hume and miracles). But in addition to the fairly balanced evidence, we have experience basically supporting the general idea that miracles do not happen. This is not outweighed by anything in particular, and so it is the only thing that remains after the other evidence balances itself out of the equation. Hume goes on:

I beg the limitations here made may be remarked, when I say, that a miracle can never be proved, so as to be the foundation of a system of religion. For I own, that otherwise, there may possibly be miracles, or violations of the usual course of nature, of such a kind as to admit of proof from human testimony; though, perhaps, it will be impossible to find any such in all the records of history. Thus, suppose, all authors, in all languages, agree, that, from the first of January, 1600, there was a total darkness over the whole earth for eight days: suppose that the tradition of this extraordinary event is still strong and lively among the people: that all travellers, who return from foreign countries, bring us accounts of the same tradition, without the least variation or contradiction: it is evident, that our present philosophers, instead of doubting the fact, ought to receive it as certain, and ought to search for the causes whence it might be derived. The decay, corruption, and dissolution of nature, is an event rendered probable by so many analogies, that any phenomenon, which seems to have a tendency towards that catastrophe, comes within the reach of human testimony, if that testimony be very extensive and uniform.

But suppose, that all the historians who treat of England, should agree, that, on the first of January, 1600, Queen Elizabeth died; that both before and after her death she was seen by her physicians and the whole court, as is usual with persons of her rank; that her successor was acknowledged and proclaimed by the parliament; and that, after being interred a month, she again appeared, resumed the throne, and governed England for three years: I must confess that I should be surprised at the concurrence of so many odd circumstances, but should not have the least inclination to believe so miraculous an event. I should not doubt of her pretended death, and of those other public circumstances that followed it: I should only assert it to have been pretended, and that it neither was, nor possibly could be real. You would in vain object to me the difficulty, and almost impossibility of deceiving the world in an affair of such consequence; the wisdom and solid judgment of that renowned queen; with the little or no advantage which she could reap from so poor an artifice: all this might astonish me; but I would still reply, that the knavery and folly of men are such common phenomena, that I should rather believe the most extraordinary events to arise from their concurrence, than admit of so signal a violation of the laws of nature.

But should this miracle be ascribed to any new system of religion; men, in all ages, have been so much imposed on by ridiculous stories of that kind, that this very circumstance would be a full proof of a cheat, and sufficient, with all men of sense, not only to make them reject the fact, but even reject it without farther examination. Though the Being to whom the miracle is ascribed, be, in this case, Almighty, it does not, upon that account, become a whit more probable; since it is impossible for us to know the attributes or actions of such a Being, otherwise than from the experience which we have of his productions, in the usual course of nature. This still reduces us to past observation, and obliges us to compare the instances of the violation of truth in the testimony of men, with those of the violation of the laws of nature by miracles, in order to judge which of them is most likely and probable. As the violations of truth are more common in the testimony concerning religious miracles, than in that concerning any other matter of fact; this must diminish very much the authority of the former testimony, and make us form a general resolution, never to lend any attention to it, with whatever specious pretence it may be covered.

Notice how “unfair” this seems to religion, so to speak. What is the difference between the eight days of darkness, which Hume would accept, under those conditions, and the resurrection of the queen of England, which he would not? Hume’s reaction to the two situations is more consistent than it first appears. Hume would accept the historical accounts about England in the same way that he would accept the accounts about the eight days of darkness. The difference is in how he would explain the accounts. He says of the darkness, “It is evident, that our present philosophers, instead of doubting the fact, ought to receive it as certain, and ought to search for the causes whence it might be derived.” Likewise, he would accept the historical accounts as certain insofar as they say that a burial ceremony took place, the queen was absent from public life, and so on. But he would not accept that the queen was dead and came back to life. Why? The “search for the causes” seems to explain this. It is plausible to Hume that causes of eight days of darkness might be found, but not plausible to him that causes of a resurrection might be found. He hints at this in the words, “The decay, corruption, and dissolution of nature, is an event rendered probable by so many analogies,” while in contrast a resurrection would be “so signal a violation of the laws of nature.”

It is clear that Hume excludes certain miracles, such as resurrection, from the possibility of being established by the evidence of testimony. But he makes the additional point that even if he did not exclude them, he would not find it reasonable to establish a “system of religion” on such testimony, given that “violations of truth are more common in the testimony concerning religious miracles, than in that concerning any other matter of fact.”

It is hard to argue with the claim that “violations of truth” are especially common in testimony about miracles. But does any of this justify Hume’s negative attitude to miracles as establishing “systems of religion,” or is this all just prejudice? There might well be a good deal of prejudice involved in his opinions. Nonetheless, Alexander Pruss’s discussion of anomaly allows us to formalize Hume’s idea as a genuine insight as well.

One way to look at truth in religion is to treat it as a way of life or as membership in a community. And in this way, asking whether miracles can establish a system of religion is just asking whether a person can be moved to a way of life or to join a community through such things. And clearly this is possible, and often happens. But another way to consider truth in religion is to look at a doctrinal system as a set of claims about how the world is. Looked at in this way, a doctrinal system presents a proposed larger context for our place in the world, one that we would be unaware of without the religion. This implies that one should have a prior probability (namely, prior to consideration of arguments in its favor) strongly against the system considered as such, for reasons very much like the reasons we should have a prior probability strongly against Ron Conte’s predictions.

We can thus apply Alexander Pruss’s framework. Let us take Mormonism as the “system of religion” in question. Then, taking it as a set of claims about the world, our initial assessment would be that it is very unlikely that the world is set up this way. Then let us take a purported miracle establishing this system: Joseph Smith finds his golden plates. In principle, if this cashed out in a certain way, it could actually establish his system. But it doesn’t cash out that way. We know very little about the plates, the circumstances of their discovery (if there was any), and their actual content. Instead, what we are left with is an anomaly: something unusual happened, and it might be able to be described as “finding golden plates,” but that’s pretty much all we know.

Then we have the theory, T, which has a high prior probability: Mormonism is almost certainly false. We have the observation E: Joseph Smith discovered his golden plates (in one sense or another). And we have the auxiliary hypotheses which imply that he could not have discovered the plates if Mormonism is false. The Bayesian updates in Pruss’s scheme imply that our conclusion is this: Mormonism is almost certainly false, and there is almost certainly an error in the auxiliary hypotheses that imply he could not have discovered them if it were false.
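
To make the structure of this conclusion explicit, here is the same Bayesian machinery applied to the religious case. Every number below is my own illustrative assumption (the text gives none); only the shape of the calculation comes from Pruss.

```python
# Illustrative only: T = "Mormonism is false", A = auxiliary hypotheses implying
# the plates could not have been discovered if T, E = the discovery report.
p_T = 0.9999    # assumed high prior that the religious system is false
p_A = 0.9       # assumed prior for the auxiliary hypotheses

p_E_given_A_T   = 0.0   # T and A together entail no discovery
p_E_given_nA_T  = 0.5   # with A false, "something describable as finding plates" is up for grabs
p_E_given_nA_nT = 0.5
p_E_given_A_nT  = 0.5   # if the system were true, the discovery would be unsurprising

p_E_given_T  = p_E_given_A_T * p_A + p_E_given_nA_T * (1 - p_A)
p_E_given_nT = p_E_given_A_nT * p_A + p_E_given_nA_nT * (1 - p_A)
p_T_given_E  = p_E_given_T * p_T / (p_E_given_T * p_T + p_E_given_nT * (1 - p_T))
print(round(p_T_given_E, 3))   # ~0.999: "the system is false" barely budges

# The auxiliary hypotheses, by contrast, take the hit.
p_E_given_A  = p_E_given_A_T * p_T + p_E_given_A_nT * (1 - p_T)
p_E_given_nA = p_E_given_nA_T * p_T + p_E_given_nA_nT * (1 - p_T)
p_A_given_E  = p_E_given_A * p_A / (p_E_given_A * p_A + p_E_given_nA * (1 - p_A))
print(round(p_A_given_E, 4))   # ~0.0009: almost certainly an error in the auxiliaries
```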

Thus Hume’s attitude is roughly justified: he should not change his opinion about religious systems in any significant way based on testimony about miracles.

To make you feel better, this does not prove that your religion is false. It just nearly proves that. In particular, this does not take into account an update based on the fact that “many people accept this set of claims.” This is a different fact, and it is not an anomaly. If you update on this fact and end up with a non-trivial probability that your set of claims is true, testimony about miracles might well strengthen this into conviction.

I will respond to one particular objection, however. Some will take this argument to be stubborn and wicked, because it seems to imply that people shouldn’t be “convinced even if someone rises from the dead.” And this does in fact follow, more or less. An anomalous occurrence in most cases will have a perfectly ordinary explanation in terms of things that are already a part of our ordinary understanding of the world, without having to add some larger context. For example, suppose you heard your fan (as a piece of furniture, not as a person) talking to you. You might suppose that you were hallucinating. But suppose it turns out that you are definitely not hallucinating. Should you conclude that there is some special source from outside the normal world that is communicating with you? No: the fan scenario can happen, and it turns out to have a perfectly everyday explanation. We might agree with Hume that it would be much more implausible that a resurrection would have an everyday explanation. Nonetheless, even if we end up concluding to the existence of some larger context, and that the miracle has no such everyday explanation, there is no good reason for it to be such and such a specific system of doctrine. Consider again Ron Conte’s predictions for the future. Most likely, the things that happen between now and 2040, and even the things that happen in the 2400s, will be perfectly ordinary (although the things in the 2400s might differ from current events in fairly radical ways). But even if they are not, and even if apocalyptic, miraculous occurrences are common in those days, this does not raise the probability of Conte’s specific predictions above any trivial level. In the same way, the anomalous occurrences involved in the accounts of miracles will not lend any significant probability to a religious system.

The objection here is that this seems unfair to God, so to speak. What if God wanted to reveal something to the world? What could he do, besides work miracles? I won’t propose a specific answer to this, because I am not God. But I will illustrate the situation with a little story to show that there is nothing unfair to God about it.

Suppose human beings created an artificial intelligence and raised it in a simulated environment. Wanting things to work themselves out “naturally,” so to speak, because it would be less work, and because it would probably be necessary to the learning process, they institute “natural laws” in the simulated world which are followed in an exceptionless way. Once the AI is “grown up”, so to speak, they decide to start communicating with it. In the AI’s world, this will surely show up as some kind of miracle: something will happen that was utterly unpredictable to it, and which is completely inconsistent with the natural laws as it knew them.

Will the AI be forced by the reasoning of this post to ignore the communication? Well, that depends on what exactly occurs and how. At the end of his post, Pruss discusses situations where anomalous occurrences should change your mind:

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: We would put almost no weight in someone finding an anomaly in the course of an undergraduate physics lab—not just because an undergraduate student is likely doing it (it could be the professor testing the equipment, though), but because this is ground well-gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence, how do we allow anomalies to have a rightful place in undermining theories? The answer is: To undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.

If the AI finds itself in an entirely new situation (if, for example, rather than hearing an obscure voice from a fan, it is consistently able to talk to the newly discovered occupant of the world on a regular basis), it will have no trouble realizing that its situation has changed, and no difficulty concluding that it is receiving communication from its author. This does, sort of, give one particular method that could be used to communicate a revelation. But there might well be many others.

Our objector will continue. This is still not fair. Now you are saying that God could give a revelation but that if he did, the world would be very different from the actual world. But what if he wanted to give a revelation in the actual world, without it being any different from the way it is? How could he convince you in that case?

Let me respond with an analogy. What if the sky were actually red like the sky of Mars, but looked blue, just as it does now? What would convince you that it was red? The fact that there is no way to convince you that it is red in our actual situation means you are unfairly prejudiced against the redness of the sky.

In other words, indeed, I am unwilling to be convinced that the sky is red except in situations where it is actually red, and those situations are quite different from our actual situation. And indeed, I am unwilling to be convinced of a revelation except in situations where there is actually a revelation, and those are quite different from our actual situation.

Common Sense

I have tended to emphasize common sense as a basic source in attempting to philosophize or otherwise understand reality. Let me explain what I mean by the idea of common sense.

The basic idea is that something is common sense when everyone agrees that it is true. If we start with this vague account, something will be more definitively common sense to the degree that it is truer that “everyone” agrees, and likewise to the degree that it is truer that everyone “agrees”: both the breadth of the agreement and its strength matter.

If we consider anything that one might think of as a philosophical view, we will find at least a few people who disagree, at least verbally, with the claim. But we may be able to find some that virtually everyone agrees with. These pertain more to common sense than things that fewer people agree with. Likewise, if we consider everyday claims rather than philosophical ones, we will probably be able to find things that everyone agrees with apart from some very localized contexts. These pertain even more to common sense. Likewise, if everyone has always agreed with something both in the past and present, that pertains more to common sense than something that everyone agrees with in the present, but where some have disagreed in the past.

It will be truer that everyone agrees in various ways: if everyone is very certain of something, that pertains more to common sense than something people are less certain about. If some people express disagreement with a view, but everyone’s revealed preferences or beliefs indicate agreement, that can be said to pertain to common sense to some degree, but not so much as where verbal affirmations and revealed preferences and beliefs are aligned.

Naturally, all of this is a question of vague boundaries: opinions are more or less a matter of common sense. We cannot sort them into two clear categories of “common sense” and “not common sense.” Nonetheless, we would want to base our arguments, as much as possible, on things that are more squarely matters of common sense.

We can raise two questions about this. First, is it even possible? Second, why do it?

One might object that the proposal is impossible. For no one can really reason except from their own opinions. Otherwise, one might be formulating a chain of argument, but it is not one’s own argument or one’s own conclusion. But this objection is easily answered. In the first place, if everyone agrees on something, you probably agree yourself, and so reasoning from common sense will still be reasoning from your own opinions. Second, if you don’t personally agree, since belief is voluntary, you are capable of agreeing if you choose, and you probably should, for reasons which will be explained in answering the second question.

Nonetheless, the objection is a reasonable place to point out one additional qualification. “Everyone agrees with this” is itself a personal point of view that someone holds, and no one is infallible even with respect to this. So you might think that everyone agrees, while in fact they do not. But this simply means that you have no choice but to do the best you can in determining what is or what is not common sense. Of course you can be mistaken about this, as you can about anything.

Why argue from common sense? I will make two points, a practical one and a theoretical one. The practical point is that if your arguments are public, as for example this blog, rather than written down in a private journal, then you presumably want people to read them and to gain from them in some way. The more you begin from common sense, the more profitable your thoughts will be in this respect. More people will be able to gain from your thoughts and arguments if more people agree with the starting points.

There is also a theoretical point. Consider the statement, “The truth of a statement never makes a person more likely to utter it.” If this statement were true, no one could ever utter it on account of its truth, but only for other reasons. So it is not something that a seeker of truth would ever say. On the other hand, there can be no doubt that the falsehood of some statements, on some occasions, makes those statements more likely to be affirmed by some people. Nonetheless, the nature of language demands that people have an overall tendency, most of the time and in most situations, to speak the truth. We would not be able to learn the meaning of a word without it being applied accurately, most of the time, to the thing that it means. In fact, if everyone were always uttering falsehoods, we would simply learn that “is” means “is not,” and that “is not” means “is,” and the supposed falsehoods would not be false in the language that we would acquire.

It follows that greater agreement that something is true, other things being equal, implies that the thing is more likely to be actually true. Stones have a tendency to fall down: so if we find a great collection of stones, the collection is more likely to be down at the bottom of a cliff rather than perched precisely on the tip of a mountain. Likewise, people have a tendency to utter the truth, so a great collection of agreement suggests something true rather than something false.
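
As a toy illustration of this last point (the model and its numbers are mine, not the author’s): if each person independently has even a mild tendency to affirm a claim more readily when it is true, widespread agreement rapidly becomes strong evidence.

```python
# Toy model: n_agree people independently affirm a claim; each is slightly more
# likely to affirm it if it is true (0.6) than if it is false (0.4).
def posterior_after_agreement(n_agree, prior=0.5, p_if_true=0.6, p_if_false=0.4):
    """Posterior probability that the claim is true after n_agree affirmations."""
    like_true = p_if_true ** n_agree    # chance of this much agreement if true
    like_false = p_if_false ** n_agree  # chance of this much agreement if false
    return like_true * prior / (like_true * prior + like_false * (1 - prior))

for n in (1, 5, 20):
    print(n, round(posterior_after_agreement(n), 4))
# 1 -> 0.6, 5 -> 0.8836, 20 -> 0.9997: agreement piles up like stones below a cliff
```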

Of course, this argument depends on “other things being equal,” which is not always the case. It is possible that most people agree on something, but you are reasonably convinced that they are mistaken, for other reasons. But if this is the case, your arguments should depend on things that they would agree with even more strongly than they agree with the opposite of your conclusion. In other words, it should be based on things which pertain even more to common sense. Suppose it does not: ultimately the very starting point of your argument is something that everyone else agrees is false. This will probably be an evident insanity from the beginning, but let us suppose that you find it reasonable. In this case, Robin Hanson’s result discussed here implies that you must be convinced that you were created in very special circumstances which would guarantee that you would be right, even though no one else was created in these circumstances. There is of course no basis for such a conviction. And our ability to modify our priors, discussed there, implies that the reasonable behavior is to choose to agree with the priors of common sense, if we find our natural priors departing from them, except in cases where the disagreement is caused by agreement with even stronger priors of common sense. Thus for example in this post I gave reasons for disagreeing with our natural prior on the question, “Is this person lying or otherwise deceived?” in some cases. But this was based on mathematical arguments that are even more convincing than that natural prior.

Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. The odds against might be some orders of magnitude lower, since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
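
For concreteness, here is a minimal sketch of the arithmetic, using only figures already mentioned above.

```python
import math

# Number of possible orderings of a 52-card deck.
orderings = math.factorial(52)
print(f"{float(orderings):.7e}")   # 8.0658175e+67, the figure quoted in the story

# Chance of a well-shuffled deck landing back in its original order.
print(1 / orderings)               # ~1.24e-68

# Even knocking dozens of orders of magnitude off for possible "unshuffling",
# the chance remains unimaginably small.
print(1 / 10**40)                  # 1e-40
```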

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say: it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.
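To put rough numbers on this (a back-of-the-envelope sketch only; the base rate for a storyteller lying or misremembering is an invented figure, and nothing hinges on its exact value):

    # Rough comparison of the two hypotheses; the numbers are illustrative.
    from math import factorial

    # A fair shuffle landing in one pre-specified order: about 1.2e-68.
    p_chance = 1 / factorial(52)

    # Suppose, very generously, that only one storyteller in a billion would lie
    # about or misremember such a feat. This base rate is a made-up assumption.
    p_lie = 1e-9

    print(f"pure chance:               {p_chance:.2e}")
    print(f"lie or misremembering:     {p_lie:.2e}")
    print(f"ratio in favor of the lie: {p_lie / p_chance:.2e}")

Even if that assumed base rate were off by twenty or thirty orders of magnitude, the comparison would not come close to reversing.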

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.


Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
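The point depends only on the conjunction rule of probability. Writing M for “my life has a positive meaning” and E for any particular explanation of why (the labels are mine, for illustration):

    P(M \wedge E) \;=\; P(M)\, P(E \mid M) \;\leq\; P(M)

So whatever explanation is chosen, the bare claim is at least as probable as the claim together with that explanation, and strictly more probable unless the explanation is certain given the claim.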

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
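A toy simulation makes this selection effect vivid (the counts and distributions here are invented purely for illustration, not taken from Alexander’s post):

    # Toy model: many honest stories constrained by messy reality, a few
    # fabricated ones free to be as interesting as their authors like.
    import random
    random.seed(0)

    honest = [("honest", random.gauss(0.0, 1.0)) for _ in range(10_000)]
    fabricated = [("fabricated", random.gauss(4.0, 1.0)) for _ in range(10)]

    # Rank everything by "interestingness" and look at what floats to the top.
    top = sorted(honest + fabricated, key=lambda story: story[1], reverse=True)[:10]
    print([label for label, _ in top])
    # Although fabrications are only 0.1% of the pool, with these assumed
    # distributions they typically dominate the top of the ranking.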

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at his account: you will not notice how much you are losing.

The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs. libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for several reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
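As a minimal sketch of what “weighing the consequences overall” amounts to in the thought experiment (all the utilities are made-up stand-ins; only their relative sizes matter):

    # Expected value of each belief, given the decree in the thought experiment.
    def expected_value(believe_free_will: bool, p_free_will: float) -> float:
        truth_value = 1.0                                      # assumed benefit of a true belief
        afterlife = -1000.0 if believe_free_will else 1000.0   # hell vs. heaven, per the decree
        p_correct = p_free_will if believe_free_will else 1.0 - p_free_will
        return p_correct * truth_value + (1.0 - p_correct) * (-truth_value) + afterlife

    for p in (0.1, 0.5, 0.9):
        print(p, expected_value(True, p), expected_value(False, p))
    # For any probability of having libertarian free will, the afterlife term
    # swamps the value of truth, so disbelief comes out better overall.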

Fairies, Unicorns, Werewolves, and Certain Theories of Richard Dawkins

In A Devil’s Chaplain, Richard Dawkins explains his opposition to religion:

To describe religions as mind viruses is sometimes interpreted as contemptuous or even hostile. It is both. I am often asked why I am so hostile to ‘organized religion’. My first response is that I am not exactly friendly towards disorganized religion either. As a lover of truth, I am suspicious of strongly held beliefs that are unsupported by evidence: fairies, unicorns, werewolves, any of the infinite set of conceivable and unfalsifiable beliefs epitomized by Bertrand Russell’s hypothetical china teapot orbiting the Sun. The reason organized religion merits outright hostility is that, unlike belief in Russell’s teapot, religion is powerful, influential, tax-exempt and systematically passed on to children too young to defend themselves. Children are not compelled to spend their formative years memorizing loony books about teapots. Government-subsidized schools don’t exclude children whose parents prefer the wrong shape of teapot. Teapot-believers don’t stone teapot-unbelievers, teapot-apostates, teapot-heretics and teapot-blasphemers to death. Mothers don’t warn their sons off marrying teapot-shiksas whose parents believe in three teapots rather than one. People who put the milk in first don’t kneecap those who put the tea in first.

We have previously discussed the error of supposing that other people’s beliefs are “unsupported by evidence” in the way that the hypothetical china teapot is unsupported. But the curious thing about this passage is that it carries its own refutation. As Dawkins says, the place of religion in the world is very different from the place of belief in fairies, unicorns, and werewolves. These differences are empirical differences in the real world: it is in the real world that people teach their children about religion, but not about orbiting teapots, or in general even about fairies, unicorns, and werewolves.

The conclusion for Dawkins ought not to be hostility towards religion, then, but rather the conclusion, “These appear to me to be beliefs unsupported by evidence, but this must be a mistaken appearance, since obviously humans relate to these beliefs in very different ways than they do to beliefs unsupported by evidence.”

I would suggest that what is actually happening is that Dawkins is making an abstract argument about what the world should look like given that religions are false, much in the way that P. Edmund Waldstein’s argument for integralism is an abstract argument about what the world should look like given that God has revealed a supernatural end. Both theories simply pay no attention to the real world: in the real world, human beings do not in general know a supernatural end (at least not in the detailed way required by P. Edmund’s theory), and in the real world, human beings do not treat religious beliefs as beliefs unsupported by evidence.

The argument by Dawkins would proceed like this: religions are false. Therefore they are just sets of beliefs that posit numerous concrete claims, like assumptions into heaven, virgin births, and so on, which simply do not correspond to anything at all in the real world. Therefore beliefs in these things should be just like beliefs in other such non-existent things, like fairies, unicorns, and werewolves.

The basic conclusion is false, and Dawkins points out its falsity himself in the above quotation.

Nonetheless, people do not tend to be so wrong that there is nothing right about what they say, and there is some truth in what Dawkins is saying, namely that many religious beliefs do make claims which are wildly far from reality. Rod Dreher hovers around this point:

A Facebook friend posted to his page:

“Shut up! No way – you’re too smart! I’m sorry, that came out wrong…”

The reaction a good friend and Evangelical Christian colleague had when she found out I’m a Catholic.

Priceless.

I had to laugh at that, because it recalled conversations I’ve been part of (alas) back in the 1990s, as a fresh Catholic convert, in which we Catholics wondered among ourselves why any smart people would be Evangelical. After I told a Catholic intellectual friend back in 2006 that I was becoming Orthodox, he said something to the effect of, “You’re too smart for that.”

It’s interesting to contemplate why we religious people who believe things that are rather implausible from a relatively neutral point of view can’t understand how intelligent religious people who believe very different things can possibly hold those opinions. I kept getting into this argument with other conservative Christians when Mitt Romney was running for president. They couldn’t bring themselves to vote for him because he’s a Mormon, and Mormons believe “crazy” things. Well, yes, from an orthodox Christian point of view, their beliefs are outlandish, but come on, we believe, as they do, that the God of all Creation, infinite and beyond time, took the form of a mortal man, suffered, died, arose again, and ascended into heaven — and that our lives on this earth and our lives in eternity depend on uniting ourselves to Him. And we believe that that same God established a sacred covenant with a Semitic desert tribe, and made Himself known to mankind through His words to them. And so forth. And these are only the basic “crazy things” that we believe! Judge Mormons to be incorrect in their theology, fine, but if you think they are somehow intellectually defective for believing the things they do that diverge from Christian orthodoxy, then it is you who are suffering from a defect of the intellectual imagination.

My point is not to say all religious belief is equally irrational, or that it is irrational at all. I don’t believe that. A very great deal depends on the premises from which you begin. Catholics and Orthodox, for example, find it strange that so many Evangelicals believe that holding to the Christian faith requires believing that the Genesis story of a seven-day creation must be taken literally, such that the world is only 7,000 years old, and so forth. But then, we don’t read the Bible as they do. I find it wildly implausible that they believe these things, but I personally know people who are much more intelligent than I am who strongly believe them. I wouldn’t want these folks teaching geology or biology to my kids, but to deny their intelligence would be, well, stupid.

I suspect that Dreher has not completely thought through the consequences of these things, and most likely he would not want to. For example, he presumably thinks that his own Christian beliefs are not irrational at all. So are the Mormon beliefs slightly irrational, or also not irrational at all? If Mormon beliefs are false, they are wildly far off from reality. Surely there is something wrong with beliefs that are wildly far off from reality, even if you do not want to use the particular term “irrational.” And presumably claims that are very distant from reality should not be supported by vast amounts of strong evidence, even if, unlike Dawkins, you admit that some evidence will support them.