Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently, if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less, since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
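The scale of 52! is easy to confirm directly. A minimal sketch (the exact-order probability assumes a perfectly random shuffle):

```python
import math

# Number of possible orderings of a standard 52-card deck.
orderings = math.factorial(52)

# Probability that a well-shuffled deck lands in one particular
# order (e.g. its original factory order), assuming pure chance.
p_chance = 1 / orderings

print(f"52! = {orderings:.4e}")        # about 8.0658e+67, as in the story
print(f"P(exact order) = {p_chance:.4e}")
```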

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:
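The gap between the two hypotheses can be made vivid with a rough odds calculation. The lying rate below is an invented placeholder, deliberately set very low; even so, the conclusion is unaffected:

```python
import math

# Assumed rate of "apparently unmotivated flat out lying for no reason."
# This number is made up for illustration; any remotely realistic value
# gives the same qualitative result.
p_lie = 1e-6

# Probability of the story being true by pure chance.
p_chance = 1 / math.factorial(52)

# Odds ratio in favor of lying.
odds = p_lie / p_chance
print(f"lying is roughly 10^{math.log10(odds):.0f} times more likely")
```

Even granting a lying rate a million times lower still, the odds ratio remains astronomically lopsided, which is the point of the paragraph above.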

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say, and it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely itself motivated: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” while my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have (or did) randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.




Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
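The conjunction point can be checked mechanically: for any events A and B, P(A and B) can never exceed P(A). The particular numbers below are arbitrary assumptions, chosen only to illustrate the rule:

```python
# P("my life has a positive meaning") -- an arbitrary illustrative value.
p_meaning = 0.6

# P(a particular explanation is correct, given that life has meaning).
p_explanation_given_meaning = 0.3

# The conjunction: meaning AND this particular explanation of it.
p_both = p_meaning * p_explanation_given_meaning

# The conjunction is necessarily no more probable than the bare claim.
assert p_both <= p_meaning
print(p_meaning, p_both)
```

Whatever values one plugs in, the bare claim “my life has a positive meaning” dominates any particular elaboration of it, which is why picking a detailed explanation and calling it more likely is the conjunction fallacy.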

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are distinct things, and the corresponding goals are distinct, so one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
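The selection effect Scott describes can be sketched in a toy simulation. All the parameters here are invented: many careful labs measure a small true effect with noise, a few sloppy labs produce inflated artifacts, and we look at who ends up with the most striking results:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# 950 careful labs: small true effect, modest noise (invented parameters).
careful = [random.gauss(0.1, 0.1) for _ in range(950)]

# 50 sloppy labs: methodology that inflates effect sizes (invented parameters).
sloppy = [random.gauss(0.8, 0.3) for _ in range(50)]

# Pool every reported effect size and sort from most to least striking.
results = [(x, "careful") for x in careful] + [(x, "sloppy") for x in sloppy]
results.sort(reverse=True)

top10 = [label for _, label in results[:10]]
print(top10.count("sloppy"), "of the top 10 'findings' come from sloppy labs")
```

Although sloppy labs are only 5% of the field in this sketch, the most interesting results are almost entirely theirs, mirroring the Reddit-liars dynamic from the opening of the post.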

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.

The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to the normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will. [from 1]
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for several reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not, has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
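The weighing described here can be sketched as a toy expected-value calculation. All the numbers below are illustrative assumptions of my own (the thought experiment only stipulates that the eternal stakes vastly outweigh the modest benefit of true belief); only the ordering of the outcomes matters.

```python
# Hypothetical utilities for the heaven/hell thought experiment above.
# All magnitudes are made-up assumptions; only their relative sizes matter.
U_HEAVEN = 1_000_000      # stipulated very large good
U_HELL = -1_000_000       # stipulated very large evil
U_TRUE_BELIEF = 10        # modest benefit of believing something true
U_FALSE_BELIEF = -10      # modest cost of believing something false

def expected_value(believe_free_will: bool, p_free_will: float) -> float:
    """Expected value of a belief policy, given P(libertarian free will)."""
    if believe_free_will:
        # Believers go to hell regardless; the truth benefit accrues only
        # if libertarian free will turns out to be real.
        return U_HELL + p_free_will * U_TRUE_BELIEF + (1 - p_free_will) * U_FALSE_BELIEF
    else:
        # Disbelievers go to heaven regardless; their belief is true only
        # if libertarian free will turns out to be unreal.
        return U_HEAVEN + p_free_will * U_FALSE_BELIEF + (1 - p_free_will) * U_TRUE_BELIEF

# Even at 50/50, and even at certainty that free will is real, the eternal
# stakes swamp the truth benefit, so disbelief comes out better overall.
print(expected_value(True, 0.5))   # -1000000.0
print(expected_value(False, 0.5))  # 1000000.0
```

The point of the sketch is just the one made in the text: determining what is better to believe requires weighing all the consequences, including the consequences of being mistaken, not only the value of possibly believing a truth.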

Fairies, Unicorns, Werewolves, and Certain Theories of Richard Dawkins

In A Devil’s Chaplain, Richard Dawkins explains his opposition to religion:

To describe religions as mind viruses is sometimes interpreted as contemptuous or even hostile. It is both. I am often asked why I am so hostile to ‘organized religion’. My first response is that I am not exactly friendly towards disorganized religion either. As a lover of truth, I am suspicious of strongly held beliefs that are unsupported by evidence: fairies, unicorns, werewolves, any of the infinite set of conceivable and unfalsifiable beliefs epitomized by Bertrand Russell’s hypothetical china teapot orbiting the Sun. The reason organized religion merits outright hostility is that, unlike belief in Russell’s teapot, religion is powerful, influential, tax-exempt and systematically passed on to children too young to defend themselves. Children are not compelled to spend their formative years memorizing loony books about teapots. Government-subsidized schools don’t exclude children whose parents prefer the wrong shape of teapot. Teapot-believers don’t stone teapot-unbelievers, teapot-apostates, teapot-heretics and teapot-blasphemers to death. Mothers don’t warn their sons off marrying teapot-shiksas whose parents believe in three teapots rather than one. People who put the milk in first don’t kneecap those who put the tea in first.

We have previously discussed the error of supposing that other people’s beliefs are “unsupported by evidence” in the way that the hypothetical china teapot is unsupported. But the curious thing about this passage is that it carries its own refutation. As Dawkins says, the place of religion in the world is very different from the place of belief in fairies, unicorns, and werewolves. These differences are empirical differences in the real world: it is in the real world that people teach their children about religion, but not about orbiting teapots, or in general even about fairies, unicorns, and werewolves.

The conclusion for Dawkins ought not to be hostility towards religion, then, but rather the conclusion, “These appear to me to be beliefs unsupported by evidence, but this must be a mistaken appearance, since obviously humans relate to these beliefs in very different ways than they do to beliefs unsupported by evidence.”

I would suggest that what is actually happening is that Dawkins is making an abstract argument about what the world should look like given that religions are false, much in the way that P. Edmund Waldstein’s argument for integralism is an abstract argument about what the world should look like given that God has revealed a supernatural end. Both theories simply pay no attention to the real world: in the real world, human beings do not in general know a supernatural end (at least not in the detailed way required by P. Edmund’s theory), and in the real world, human beings do not treat religious beliefs as beliefs unsupported by evidence.

The argument by Dawkins would proceed like this: religions are false. Therefore they are just sets of beliefs that posit numerous concrete claims, like assumptions into heaven, virgin births, and so on, which simply do not correspond to anything at all in the real world. Therefore beliefs in these things should be just like beliefs in other such non-existent things, like fairies, unicorns, and werewolves.

The basic conclusion is false, and Dawkins points out its falsity himself in the above quotation.

Nonetheless, people do not tend to be so wrong that there is nothing right about what they say, and there is some truth in what Dawkins is saying, namely that many religious beliefs do make claims which are wildly far from reality. Rod Dreher hovers around this point:

A Facebook friend posted to his page:

“Shut up! No way – you’re too smart! I’m sorry, that came out wrong…”

The reaction a good friend and Evangelical Christian colleague had when she found out I’m a Catholic.


I had to laugh at that, because it recalled conversations I’ve been part of (alas) back in the 1990s, as a fresh Catholic convert, in which we Catholics wondered among ourselves why any smart people would be Evangelical. After I told a Catholic intellectual friend back in 2006 that I was becoming Orthodox, he said something to the effect of, “You’re too smart for that.”

It’s interesting to contemplate why we religious people who believe things that are rather implausible from a relatively neutral point of view can’t understand how intelligent religious people who believe very different things can possibly hold those opinions. I kept getting into this argument with other conservative Christians when Mitt Romney was running for president. They couldn’t bring themselves to vote for him because he’s a Mormon, and Mormons believe “crazy” things. Well, yes, from an orthodox Christian point of view, their beliefs are outlandish, but come on, we believe, as they do, that the God of all Creation, infinite and beyond time, took the form of a mortal man, suffered, died, arose again, and ascended into heaven — and that our lives on this earth and our lives in eternity depend on uniting ourselves to Him. And we believe that that same God established a sacred covenant with a Semitic desert tribe, and made Himself known to mankind through His words to them. And so forth. And these are only the basic “crazy things” that we believe! Judge Mormons to be incorrect in their theology, fine, but if you think they are somehow intellectually defective for believing the things they do that diverge from Christian orthodoxy, then it is you who are suffering from a defect of the intellectual imagination.

My point is not to say all religious belief is equally irrational, or that it is irrational at all. I don’t believe that. A very great deal depends on the premises from which you begin. Catholics and Orthodox, for example, find it strange that so many Evangelicals believe that holding to the Christian faith requires believing that the Genesis story of a seven-day creation must be taken literally, such that the world is only 7,000 years old, and so forth. But then, we don’t read the Bible as they do. I find it wildly implausible that they believe these things, but I personally know people who are much more intelligent than I am who strongly believe them. I wouldn’t want these folks teaching geology or biology to my kids, but to deny their intelligence would be, well, stupid.

I suspect that Dreher has not completely thought through the consequences of these things, and most likely he would not want to. For example, he presumably thinks that his own Christian beliefs are not irrational at all. So are the Mormon beliefs slightly irrational, or also not irrational at all? If Mormon beliefs are false, they are wildly far off from reality. Surely there is something wrong with beliefs that are wildly far off from reality, even if you do not want to use the particular term “irrational.” And presumably claims that are very distant from reality should not be supported by vast amounts of strong evidence, even if unlike Dawkins you admit that some evidence will support them.

Patience, Truth, and Progress

If the Jehovah’s Witnesses are impatient with respect to truth, why do they nonetheless manage to advance in the knowledge of truth?

Our story about Peter, as a morality tale, is a bit more absolute than reality often is. Disordered behavior will more often than not produce disordered consequences, but the details will vary from case to case. Most cases of impatient driving do not in fact result in death, and most of the time the driver will still in fact get to his destination. There may however be other bad consequences, such as the unnecessary annoyance and inconvenience posed to other drivers, the growth of the driver’s bad driving habits, and so on.

In a similar way, impatience with respect to truth will tend to have bad consequences, but the details will vary from case to case. In most cases those consequences may include detrimental effects relative to the knowledge of truth, but they will not necessarily completely impede the knowledge of truth, just as bad driving does not necessarily prevent one from reaching the destination.

In the case of the Witnesses, we noted that their progress seems laughably slow. It would be reasonable to attribute this slowness to the impatience in question, while the general fact of progress can be attributed to the general causes of such progress.

Impatiently adopting an excessively detailed view will slow a person’s advance in truth in a number of ways. In the first place, such a view will very likely be false, just as the detailed predictions of the Witnesses turned out to be false. And falsehood of course impedes the knowledge of truth first by excluding the truth opposite to the falsehood. Likewise, falsehood impedes the knowledge of truth in other ways, because when we learn anything, we learn it in the context of everything else that we know. Insofar as what we think we know includes some things that are untrue, these untrue aspects will tend to distort our view of the new things that we are learning.

Second, there are particular effects of jumping to untrue conclusions that are excessively detailed. Suppose I say, “There will be a nuclear war beginning on March 3rd, 2017.” If I claim to possess a high level of confidence about this, then I must claim an even higher level of confidence that there will be a major war in 2017, and a still higher confidence that there will be one or more major disasters in the next few years. This is for the reason discussed some days ago, namely that the more general claims must be more known and more certain, and as a matter of probability theory, the numerical probability assigned to the more general claim must be at least as high as that assigned to the more specific claim.

Now suppose, as is likely, that no nuclear war begins on March 3rd, 2017. What will I conclude? It might be reasonable, in some sense, for me to conclude that I was mistaken about the more general things as well, and not only about the date of March 3rd. But I was very, very confident about the more general things, significantly more so than about the date of March 3rd. And given that my original assignment of March 3rd proceeded from impatience for specific knowledge, a more likely result is that I will now say that the war will begin on September 17th, or something like that. And even after this does not happen, I will be quite likely to say, “Well, maybe I was wrong about the details. But there is still likely to be a major war before the end of the year, or anyway in the next few years.” And this will be because of my greater certainty about the more general claims. And this greater certainty itself arose from my impatience for specific knowledge, not from a careful analysis of the facts.
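The probability ordering behind this can be sketched numerically. The conditional probabilities below are made-up figures chosen only for illustration; whatever numbers one picks, each added detail multiplies in a factor no greater than one, so the chain can only descend.

```python
# Illustrative (made-up) probabilities for the prediction chain discussed
# above; only the ordering matters, not the particular numbers.
p_some_disaster = 0.5                        # P(major disaster in next few years)
p_war           = p_some_disaster * 0.4      # P(... AND major war in 2017)
p_nuclear_war   = p_war * 0.1                # P(... AND the war is nuclear)
p_march_3       = p_nuclear_war * (1 / 365)  # P(... AND it begins March 3rd)

# Each further specification multiplies by a factor <= 1, so the more
# general claim is always at least as probable as the more specific one.
assert p_some_disaster >= p_war >= p_nuclear_war >= p_march_3
print(p_some_disaster, p_war, p_nuclear_war, p_march_3)
```

This is why, after March 3rd passes uneventfully, abandoning the specific date is far easier than abandoning the general expectation: the general claim really was assigned a much higher probability all along.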

Developing a False Doctrine

As documented here by Paul Grundy, the Jehovah’s Witnesses repeatedly made claims about the end of the world or other apocalyptic events, predicting that they would happen on specific dates. Thus for example they said in 1894, “But bear in mind that the end of 1914 is not the date for the beginning, but for the end of the time of trouble.” And again in 1920:

What, then, should we expect to take place? The chief thing to be restored is the human race to life; and since other Scriptures definitely fix the fact that there will be a resurrection of Abraham, Isaac, Jacob and other faithful ones of old, and that these will have the first favour, we may expect 1925 to witness the return of these faithful men of Israel from the condition of death, being resurrected and fully restored to perfect humanity and made the visible, legal representatives of the new order of things on earth.

Needless to say, these things did not happen. These are only a few examples of false predictions made by the Jehovah’s Witnesses.

To most people, this process seems ridiculous, and to some extent it is. Nonetheless, in a surprising way it is an example of progress in truth. With each failed prophecy, the Witnesses learn something new: first they learn that the world will not end in 1914, then they learn that it will not end or be remarkably restored in 1925, and so on.

The reason this seems ridiculous is that we believe that they should be learning something more. They should be learning that it is false that “apocalyptic events will soon take place and it is within our power to determine in advance their specific timing.” And yes, it would be reasonable for them to learn this. But even if they do not, they are still learning something.

Why do they persist in making the claim that apocalyptic events will soon take place, and that they can determine their timing in advance, even after each particular case is falsified? This is related to our previous post. Their general claim, precisely insofar as it is general, is necessarily more likely, and so “more known”, as it were, than each of the specific predictions. It is as if one were to see something in the distance and to believe, “it is a man,” but then upon getting a bit closer, one says, “wait, it doesn’t look quite like a man, it must be an ape.” The more general belief that it is an animal persists.

The Witnesses may be advancing in truth more slowly than we think that they should, but they are advancing. And ultimately there is no reason to expect this to end with the learning of particulars alone. In fact, towards the end of his article, Grundy says, “Toward the end of the twentieth century, the Watchtower Society refrained from issuing specific dates for Armageddon, but still has not stopped implying dates and time frames.” In other words, they continue to maintain that “apocalyptic events will soon take place,” but they are beginning to conclude that it is untrue that “we can determine their specific timing in advance.” Once again, this is because the claim that apocalyptic events will soon take place is necessarily more likely and “more known” than the combined claim that such events will take place and that one can determine their timing in advance.

The More Known and the Conjunction Fallacy

St. Thomas explains in what sense we know the universal before the particular, and in what sense the particular before the universal:

In our knowledge there are two things to be considered.

First, that intellectual knowledge in some degree arises from sensible knowledge: and, because sense has singular and individual things for its object, and intellect has the universal for its object, it follows that our knowledge of the former comes before our knowledge of the latter.

Secondly, we must consider that our intellect proceeds from a state of potentiality to a state of actuality; and every power thus proceeding from potentiality to actuality comes first to an incomplete act, which is the medium between potentiality and actuality, before accomplishing the perfect act. The perfect act of the intellect is complete knowledge, when the object is distinctly and determinately known; whereas the incomplete act is imperfect knowledge, when the object is known indistinctly, and as it were confusedly. A thing thus imperfectly known, is known partly in act and partly in potentiality, and hence the Philosopher says (Phys. i, 1), that “what is manifest and certain is known to us at first confusedly; afterwards we know it by distinguishing its principles and elements.” Now it is evident that to know an object that comprises many things, without proper knowledge of each thing contained in it, is to know that thing confusedly. In this way we can have knowledge not only of the universal whole, which contains parts potentially, but also of the integral whole; for each whole can be known confusedly, without its parts being known. But to know distinctly what is contained in the universal whole is to know the less common, as to know “animal” indistinctly is to know it as “animal”; whereas to know “animal” distinctly is to know it as “rational” or “irrational animal,” that is, to know a man or a lion: therefore our intellect knows “animal” before it knows man; and the same reason holds in comparing any more universal idea with the less universal.

Moreover, as sense, like the intellect, proceeds from potentiality to act, the same order of knowledge appears in the senses. For by sense we judge of the more common before the less common, in reference both to place and time; in reference to place, when a thing is seen afar off it is seen to be a body before it is seen to be an animal; and to be an animal before it is seen to be a man, and to be a man before it is seen to be Socrates or Plato; and the same is true as regards time, for a child can distinguish man from not man before he distinguishes this man from that, and therefore “children at first call men fathers, and later on distinguish each one from the others” (Phys. i, 1). The reason of this is clear: because he who knows a thing indistinctly is in a state of potentiality as regards its principle of distinction; as he who knows “genus” is in a state of potentiality as regards “difference.” Thus it is evident that indistinct knowledge is midway between potentiality and act.

We must therefore conclude that knowledge of the singular and individual is prior, as regards us, to the knowledge of the universal; as sensible knowledge is prior to intellectual knowledge. But in both sense and intellect the knowledge of the more common precedes the knowledge of the less common.

The universal is known from the particular in the sense that we learn the nature of the universal from the experience of particulars. But both in regard to the universal and in regard to the particular, our knowledge is first vague and confused, and becomes more distinct as it is perfected. In St. Thomas’s example, one can see that something is a body before noticing that it is an animal, and an animal before noticing that it is a man. The thing that might be confusing here is that the more certain knowledge is also the less perfect knowledge: looking at the thing in the distance, it is more certain that it is some kind of body, but it is more perfect to know that it is a man.

Insofar as probability theory is a formalization of degrees of belief, the same thing is found, and the same confusion can occur. Objectively, the more general claim should always be understood to be more probable, but the more specific claim, representing what would be more perfect knowledge, can seem more explanatory, and therefore might appear more likely. This false appearance is known as the conjunction fallacy. Thus for example as I continue to add to a blog post, the post might become more convincing. But in fact the chance that I am making a serious error in the post can only increase, not decrease, with every additional sentence.
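The blog-post remark can itself be put in numbers. Suppose, as a purely illustrative assumption, that each added sentence independently has some small chance of containing a serious error; then the probability that the whole post is error-free is a conjunction that can only shrink as sentences accumulate, however much more convincing the post may feel.

```python
# Sketch under an assumed figure: each sentence independently has a 2%
# chance of containing a serious error (the 2% is illustrative only).
p_sentence_ok = 0.98

def p_post_error_free(n_sentences: int) -> float:
    """Probability that none of n independent sentences contains an error."""
    return p_sentence_ok ** n_sentences

# The conjunction "every sentence is correct" can only become less
# probable as the post grows, never more.
for n in (10, 50, 200):
    print(n, p_post_error_free(n))
```

This is the conjunction fallacy seen from the other side: the growing specificity that makes an account feel more explanatory is exactly what drives its overall probability down.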