Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently, if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less, since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:
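The arithmetic here is easy to check directly. A minimal sketch (the one-in-a-million rate of unmotivated lying is an assumption, chosen to be absurdly charitable to Razaali):

```python
import math

# Number of possible orderings of a standard 52-card deck
n_orderings = math.factorial(52)
print(f"{n_orderings:.7e}")  # 8.0658175e+67

# Chance of shuffling back into the original order, assuming a
# thorough shuffle makes every ordering equally likely
p_chance = 1 / n_orderings

# An absurdly charitable rate of unmotivated flat-out lying,
# assumed purely for illustration
p_lie = 1e-6

# Odds ratio: how much more likely "he's lying" is than
# "the story is true, and happened by pure chance"
print(f"{p_lie / p_chance:.1e}")  # 8.1e+61
```

Even granting the charitable lying rate, and even knocking twenty or thirty orders of magnitude off the denominator for imperfect shuffling, the “he’s lying” hypothesis remains astronomically more probable.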

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say; it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely itself motivated: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, trading on a presumed implausibility of “all-around materialism”; and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have randomly ended up in its original order, trading on my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.




Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by the discussion of the topic that does come up because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.
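The point can be caricatured in a toy sketch (all the numbers and state labels here are illustrative assumptions, not anything from Clark’s book): an agent that acts so as to minimize surprise under a “blank” generative model really does prefer the darkened room, while the same rule applied to a model whose priors already encode the creature-defining “expectations” of a fed, active organism drives it out into the world.

```python
import math

def surprise(p):
    # Surprise of a sensory state = -log probability under the model
    return -math.log(p)

# Model A: a "blank" prediction machine with no built-in expectations.
# The dark room is trivially predictable, so it carries the least surprise.
model_blank = {
    "dark_room": 0.90,   # nothing happens; easy to predict
    "foraging":  0.10,   # noisy and hard to predict
}

# Model B: an evolved creature whose priors structurally encode
# "I will be fed, active, exploring."  Under these priors, sitting
# unfed in a dark room is itself a wildly unexpected sensory state.
model_evolved = {
    "dark_room": 0.02,   # starving in the dark violates the priors
    "foraging":  0.98,   # eating and moving is what it expects
}

def best_action(model):
    # Active inference, crudely: bring about the least surprising state
    return min(model, key=lambda state: surprise(model[state]))

print(best_action(model_blank))    # dark_room
print(best_action(model_evolved))  # foraging
```

The same minimization rule yields opposite behavior depending on the priors the organism is born with, which is exactly the sense in which “minimize prediction error” cannot exhaust what the embodied creature is.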

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Wishful Thinking about Wishful Thinking

Cameron Harwick discusses an apparent relationship between “New Atheism” and group selection:

Richard Dawkins’ best-known scientific achievement is popularizing the theory of gene-level selection in his book The Selfish Gene. Gene-level selection stands apart from both traditional individual-level selection and group-level selection as an explanation for human cooperation. Steven Pinker, similarly, wrote a long article on the “false allure” of group selection and is an outspoken critic of the idea.

Dawkins and Pinker are also both New Atheists, whose characteristic feature is not only a disbelief in religious claims, but an intense hostility to religion in general. Dawkins is even better known for his popular books with titles like The God Delusion, and Pinker is a board member of the Freedom From Religion Foundation.

By contrast, David Sloan Wilson, a proponent of group selection but also an atheist, is much more conciliatory to the idea of religion: even if its factual claims are false, the institution is probably adaptive and beneficial.

Unrelated as these two questions might seem – the arcane scientific dispute on the validity of group selection, and one’s feelings toward religion – the two actually bear very strongly on one another in practice.

After some discussion of the scientific issue, Harwick explains the relationship he sees between these two questions:

Why would Pinker argue that human self-sacrifice isn’t genuine, contrary to introspection, everyday experience, and the consensus in cognitive science?

To admit group selection, for Pinker, is to admit the genuineness of human altruism. Barring some very strange argument, to admit the genuineness of human altruism is to admit the adaptiveness of genuine altruism and broad self-sacrifice. And to admit the adaptiveness of broad self-sacrifice is to admit the adaptiveness of those human institutions that coordinate and reinforce it – namely, religion!

By denying the conceptual validity of anything but gene-level selection, therefore, Pinker and Dawkins are able to brush aside the evidence on religion’s enabling role in the emergence of large-scale human cooperation, and conceive of it as merely the manipulation of the masses by a disingenuous and power-hungry elite – or, worse, a memetic virus that spreads itself to the detriment of its practicing hosts.

In this sense, the New Atheist’s fundamental axiom is irrepressibly religious: what is true must be useful, and what is false cannot be useful. But why should anyone familiar with evolutionary theory think this is the case?

As another example of the tendency Cameron Harwick is discussing, we can consider this post by Eliezer Yudkowsky:

Perhaps the real reason that evolutionary “just-so stories” got a bad name is that so many attempted stories are prima facie absurdities to serious students of the field.

As an example, consider a hypothesis I’ve heard a few times (though I didn’t manage to dig up an example).  The one says:  Where does religion come from?  It appears to be a human universal, and to have its own emotion backing it – the emotion of religious faith.  Religion often involves costly sacrifices, even in hunter-gatherer tribes – why does it persist?  What selection pressure could there possibly be for religion?

So, the one concludes, religion must have evolved because it bound tribes closer together, and enabled them to defeat other tribes that didn’t have religion.

This, of course, is a group selection argument – an individual sacrifice for a group benefit – and see the referenced posts if you’re not familiar with the math, simulations, and observations which show that group selection arguments are extremely difficult to make work.  For example, a 3% individual fitness sacrifice which doubles the fitness of the tribe will fail to rise to universality, even under unrealistically liberal assumptions, if the tribe size is as large as fifty.  Tribes would need to have no more than 5 members if the individual fitness cost were 10%.  You can see at a glance from the sex ratio in human births that, in humans, individual selection pressures overwhelmingly dominate group selection pressures.  This is an example of what I mean by prima facie absurdity.
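The quantitative point in this passage can be illustrated with a toy calculation. The sketch below is my own illustrative model, not the simulations Yudkowsky references: altruists pay an individual fitness cost while every member of a group shares a benefit proportional to the group's fraction of altruists, and groups are reshuffled at random each generation. The 3% cost and the fitness-doubling benefit are taken from the quote; the group sizes, starting frequency, and number of generations are assumptions.

```python
# Toy model of the group-selection trade-off: altruists pay an individual
# fitness cost `cost`; every member of a group shares a benefit proportional
# to the group's fraction of altruists (`benefit` = 1.0 means a fully
# altruistic group doubles its fitness). Groups are re-formed at random each
# generation from a very large population.

def next_frequency(p: float, group_size: int, cost: float, benefit: float) -> float:
    """Advance the population-wide altruist frequency p by one generation."""
    n = group_size
    others = (n - 1) * p  # expected altruistic groupmates of a focal individual
    # A focal altruist raises its own group's benefit but pays the cost;
    # a focal defector enjoys the benefit of its groupmates for free.
    w_altruist = (1 - cost) * (1 + benefit * (1 + others) / n)
    w_defector = 1 + benefit * others / n
    mean_w = p * w_altruist + (1 - p) * w_defector
    return p * w_altruist / mean_w

def run(group_size: int, cost: float = 0.03, benefit: float = 1.0,
        p: float = 0.5, generations: int = 2000) -> float:
    for _ in range(generations):
        p = next_frequency(p, group_size, cost, benefit)
    return p

# A 3% sacrifice that would double a fully altruistic group's fitness dies
# out in groups of fifty; even in groups of twenty it only settles at a
# stable interior frequency: common, but never universal.
print(run(group_size=50))  # effectively 0
print(run(group_size=20))  # roughly 0.65
```

Even when altruism can invade (in small groups), each altruist is still outcompeted by the free-riders within its own group, which is why it plateaus below universality in this model; that is the sense in which the quantitative conditions for group selection are stringent.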

It does not take much imagination to see that religion could have “evolved because it bound tribes closer together” without group selection in a technical sense having anything to do with this process. But I will not belabor this point, since Eliezer’s own answer regarding the origin of religion does not exactly keep his own feelings hidden:

So why religion, then?

Well, it might just be a side effect of our ability to do things like model other minds, which enables us to conceive of disembodied minds.  Faith, as an emotion, might just be co-opted hope.

But if faith is a true religious adaptation, I don’t see why it’s even puzzling what the selection pressure could have been.

Heretics were routinely burned alive just a few centuries ago.  Or stoned to death, or executed by whatever method local fashion demands.  Questioning the local gods is the notional crime for which Socrates was made to drink hemlock.

Conversely, Huckabee just won Iowa’s nomination for tribal-chieftain.

Why would you need to go anywhere near the accursèd territory of group selectionism in order to provide an evolutionary explanation for religious faith?  Aren’t the individual selection pressures obvious?

I don’t know whether to suppose that (1) people are mapping the question onto the “clash of civilizations” issue in current affairs, (2) people want to make religion out to have some kind of nicey-nice group benefit (though exterminating other tribes isn’t very nice), or (3) when people get evolutionary hypotheses wrong, they just naturally tend to get it wrong by postulating group selection.

Let me give my own extremely credible just-so story: Eliezer Yudkowsky wrote this not fundamentally to make a point about group selection, but because he hates religion, and cannot stand the idea that it might have some benefits. It is easy to see this from his use of language like “nicey-nice,” from his suggestion that the main selection pressure in favor of religion was likely something like being burned at the stake, and from his proposal that religion might just have been a “side effect,” that is, that there was no advantage to it at all.

But as St. Paul says, “Therefore you have no excuse, whoever you are, when you judge others; for in passing judgment on another you condemn yourself, because you, the judge, are doing the very same things.” Yudkowsky believes that religion is just wishful thinking. But his belief that religion therefore cannot be useful is itself nothing but wishful thinking. In reality religion can be useful just as voluntary beliefs in general can be useful.

Zeal for God, But Not According to Knowledge

St. Thomas raises this objection to the existence of God:

Objection 2. Further, it is superfluous to suppose that what can be accounted for by a few principles has been produced by many. But it seems that everything we see in the world can be accounted for by other principles, supposing God did not exist. For all natural things can be reduced to one principle which is nature; and all voluntary things can be reduced to one principle which is human reason, or will. Therefore there is no need to suppose God’s existence.

He responds to the objection:

Since nature works for a determinate end under the direction of a higher agent, whatever is done by nature must needs be traced back to God, as to its first cause. So also whatever is done voluntarily must also be traced back to some higher cause other than human reason or will, since these can change or fail; for all things that are changeable and capable of defect must be traced back to an immovable and self-necessary first principle, as was shown in the body of the Article.

The explanation here is that things do have their own proper causes, but these proper causes do not have the properties necessary to be a first cause. Likewise, the very distinction of these proper causes from one another shows that they must be reduced to one single principle.

This response is correct, but it is difficult for people to understand. People tend to assume that the objection is fundamentally valid, given its premises. Thus many atheists believe that they have a very good argument for their atheism, and many theists assume that there must be a falsehood in the premises. And the ordinary way to find one is to say that we do see things in the world that cannot be accounted for by other principles.

This leads to an undue zeal on behalf of God, of the sort mentioned in the previous post. There is the desire to say that something was done by God, and only by God; not by anything else. In this way the premise that “everything we see in the world can be accounted for by other principles” would turn out to be false. The Intelligent Design movement provides an example of this desire. The linked Wikipedia article approaches this with a very polemical point of view, but I am not concerned here with the scientific issues. It is very evident, in any case, that there is the idea here that it would be good to prove that something was done by God alone, and not by any secondary causes. In this way people are jealous on behalf of God: if it turns out that it was done by secondary causes, that takes something from God, and in particular it makes it less likely that God exists.

The truth is mostly the opposite of this. Although nothing can be taken from God, the purposes of creation are better obtained if created things contribute whatever they can to the production of other things. Thus the world is more ordered, and so more perfect simply speaking.

As an example, consider the case of the origin of life. Unlike the process which gave rise to the origin of species, abiogenesis is not an established fact. Which alternative, then, would be best? I do not speak of the truth of the matter, nor of what we might wish to believe about it, but of which thing would be better in itself: is it better if life arises from non-living things, or is it better if life is directly created by God? For someone jealous for God in this way, it seems better if life were directly created, in order better to prove that God exists. In reality, however, it is better if life comes to be in a certain order, with a contribution from non-living things, to whatever degree this is possible.

This is not just a matter of wishful thinking, in one direction or the other, although that can be involved. Rather, in cases of this kind, the fact that one thing is better is an argument, although not a conclusive one, for its reality.

There are many other ways in which this kind of undue zeal influences human opinions, and recognition of the truth of this matter has many consequences. But for the moment we are on another path.

If At First You Don’t Succeed

Suppose you have a dozen problems in your life that you are trying to solve. And suppose that whenever you try to solve one of them, you almost always fail. Is there a chance that a time will come when you have solved them all?

There is such a chance, of course. You almost always fail, but if you continue to try other possible solutions, you might hit on a solution sooner or later. And then you will have only 11 issues remaining, and you can continue from there, working on the next one.
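The reasoning here is the complement rule of probability: if each attempt succeeds independently with probability p, the chance of at least one success in n attempts is 1 − (1 − p)^n, which climbs toward certainty as n grows. A minimal sketch (the 5% per-attempt success rate is an illustrative assumption, not a figure from the text):

```python
# Chance of at least one success in n independent attempts, each with
# success probability p (complement rule: 1 minus the chance of failing
# every single time).
def p_at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even if a given attempt almost always fails, persistence wins out:
# at a 5% per-attempt success rate, 14 attempts already give better
# than even odds, and 90 attempts give about a 99% chance.
print(p_at_least_one_success(0.05, 14))  # about 0.51
print(p_at_least_one_success(0.05, 90))  # about 0.99
```

Each solved problem removes itself from the list, so the same arithmetic then applies to the remaining eleven.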

And even after more or less resolving one problem, you might later discover a still better solution. Thus for example I discussed a certain solution to time management here, but my current solution is substantially better, although it includes important elements of that one.

In a similar way, I discussed the general idea of progress in the posts here, here, and here. A very simple summary of the ideas argued there is that people are trying to make things better for themselves and others, and even if they do not always succeed, they sometimes do. And for the reason assigned above in this post, you do not have to succeed in solving your problems all of the time, or even most of the time, in order to generally make progress.

In economics, there is a similar reason for the fact that markets do as well as they do, and in biology, why natural selection works as well as it does, despite the fact that a majority of individual changes either do nothing or are actively harmful.


My Morals and Your Morals

The last two posts have explained the changeableness in ethics as a result of the nature of the moral object, and as a result of evolution and human nature in the concrete. Still a third kind of flexibility results from individual differences.

Aristotle, as we saw, affirms that happiness and virtue consist in performing well the function of man. So insofar as people have human nature in common, their happiness and virtue will be the same. One might suppose that it follows that human happiness and virtue must be entirely the same in all, but this is a mistake. For the nature of virtue in the concrete follows not only from an abstract idea of a “rational animal,” but from the condition of the human animal taken much more concretely. This follows from the last post, where we saw that moral principles, even ones which we currently understand to be universal principles, could have been otherwise, had the circumstances of the human race been otherwise.

One might respond that this makes no difference, since all of us are members of the human race in the concrete, and consequently we must share the same concrete virtue and happiness. This does follow to some extent, just as does the general argument that all humans possess human nature. But it does not follow perfectly.

It does not follow perfectly; that is, it does not follow that our virtue and happiness are the same in every respect. If ethics were simply a logical deduction from an abstract idea like that of “rational animal,” then one might reasonably suppose that virtue and happiness would be entirely the same in all. But in fact ethics also results from facts that are intrinsically changeable, namely facts about what promotes the flourishing of the human race.

Although these facts are intrinsically changeable, one will not expect them to change from person to person in a random manner. It is not that for some, killing the innocent is harmful for human flourishing, while in others, it is beneficial. Instead, it is harmful for all.

But the fact that we are speaking of intrinsically changeable things does mean that we will have a certain amount of variation from one individual to another. There are facts about human beings that result in moral norms. But these “facts about human beings” may vary, e.g. in degree, from one human to another. Alexander Pruss, discussing the origin of Bayesian priors, makes this remark:

Let me try to soften you up in favor of anthropocentrism about priors with an ethics analogy. If sharks developed rationality, we wouldn’t expect their flourishing to involve quite as much friendship as our flourishing does. Autonomy and friendship are both of value, and yet are in tension, and we would expect different species to resolve that tension differently based on the different ways that they are characteristically adapted to their environment. This is, indeed, an argument for a significant Natural Law component in ethics: even if values are kind-independent, the appropriate resolution of tensions between them is something that may well be relative to a kind.

But just as sharks would have less need for friendship than human beings have, so one human being might have less need for friendship than another.

Aristotle discusses virtue as consisting in a mean between opposed vices:

Since, then, the present inquiry does not aim at theoretical knowledge like the others (for we are inquiring not in order to know what virtue is, but in order to become good, since otherwise our inquiry would have been of no use), we must examine the nature of actions, namely how we ought to do them; for these determine also the nature of the states of character that are produced, as we have said. Now, that we must act according to the right rule is a common principle and must be assumed-it will be discussed later, i.e. both what the right rule is, and how it is related to the other virtues. But this must be agreed upon beforehand, that the whole account of matters of conduct must be given in outline and not precisely, as we said at the very beginning that the accounts we demand must be in accordance with the subject-matter; matters concerned with conduct and questions of what is good for us have no fixity, any more than matters of health. The general account being of this nature, the account of particular cases is yet more lacking in exactness; for they do not fall under any art or precept but the agents themselves must in each case consider what is appropriate to the occasion, as happens also in the art of medicine or of navigation.

But though our present account is of this nature we must give what help we can. First, then, let us consider this, that it is the nature of such things to be destroyed by defect and excess, as we see in the case of strength and of health (for to gain light on things imperceptible we must use the evidence of sensible things); both excessive and defective exercise destroys the strength, and similarly drink or food which is above or below a certain amount destroys the health, while that which is proportionate both produces and increases and preserves it. So too is it, then, in the case of temperance and courage and the other virtues. For the man who flies from and fears everything and does not stand his ground against anything becomes a coward, and the man who fears nothing at all but goes to meet every danger becomes rash; and similarly the man who indulges in every pleasure and abstains from none becomes self-indulgent, while the man who shuns every pleasure, as boors do, becomes in a way insensible; temperance and courage, then, are destroyed by excess and defect, and preserved by the mean.

But not only are the sources and causes of their origination and growth the same as those of their destruction, but also the sphere of their actualization will be the same; for this is also true of the things which are more evident to sense, e.g. of strength; it is produced by taking much food and undergoing much exertion, and it is the strong man that will be most able to do these things. So too is it with the virtues; by abstaining from pleasures we become temperate, and it is when we have become so that we are most able to abstain from them; and similarly too in the case of courage; for by being habituated to despise things that are terrible and to stand our ground against them we become brave, and it is when we have become so that we shall be most able to stand our ground against them.

Aristotle may be making more or less the same point as this post (and the previous two) when he says that “matters concerned with conduct and questions of what is good for us have no fixity, any more than matters of health,” and likewise when he says that “the agents themselves must in each case consider what is appropriate to the occasion.” Virtue consists in a mean, not too much of something and not too little. But where exactly this mean falls will differ from one individual to another. The case of friendship mentioned above is an example. As Pruss says, “Autonomy and friendship are both of value, and yet are in tension,” and since those values will affect different people differently, we can expect different people rightly to resolve that tension in different ways, just as Pruss says we could expect different species to resolve it differently. Naturally, we might expect the difference between species to be greater than the difference between individuals. But there will be differences in each case.

So in order to arrive at the mean of truth, there are two opposite errors to be avoided here. One is the Equality Dogma. The other would be the supposition that the differences between individuals might be more or less the same as differences between species. Ian Morris, in his book Why the West Rules–for Now, remarks,

This technical debate over classifying prehistoric skeletons has potentially alarming implications. Racists are often eager to pounce on such details to justify prejudice, violence, and even genocide. You might feel that taking the time to talk about a theory of this kind merely dignifies bigotry; perhaps we should just ignore it. But that, I think, would be a mistake. Pronouncing racist theories contemptible is not enough. If we really want to reject them, and to conclude that people (in large groups) really are all much the same, it must be because racist theories are wrong, not just because most of us today do not like them.

One of the arguments of the book (best understood by reading the book) is that “people (in large groups) really are all much the same,” and that the causes of the differences between West and East were not primarily differences between peoples, but differences of other kinds such as differences of geography.


Morality and Evolution

Some days ago, I stated that ethics is more flexible than many people suppose. One reason for this is the nature of the moral object. I tried to explain how this works in the last post; more detail is found in the comments there. A second reason is that human nature itself is less fixed than many people suppose. This follows from the theory of evolution.

This issue is related to a post by Alexander Pruss, where he raises the question of why immoral behavior is not necessary for human flourishing:

Andrea Dworkin argued that sexual intercourse between a man and a woman is always wrong because it involves a violation of the woman’s bodily integrity. She concluded that until recent advances in medical technology, it was impossible for humans to permissibly reproduce. The antinatalists, on the other hand, continue to hold that it is impossible for humans to permissibly reproduce. Such views lead to an incredulous stare. It is very tempting to levy against them an argument like this:

  1. Coital reproduction is necessary for the minimal flourishing of the human community under normal conditions.
  2. Whatever is necessary for the minimal flourishing of the human community under normal conditions is sometimes permissible.
  3. Coital reproduction is sometimes permissible.

The condition “under normal conditions” is needed for (2) to be plausible. We can, after all, easily imagine science-fictional scenarios where something immoral would need to be done to ensure the minimal flourishing of the human community.

Reproduction is not the only case where issues like this come up. For instance, the destruction of non-human organisms, say plants, seems necessary for our flourishing. And I suspect that under normal conditions the killing of non-human animals is necessary, too (if only as a side-effect of plowing fields, say). Taxation may be another interesting example.

I have heard it argued that (2) is in itself a basic moral principle, so that killing non-human animals as a side-effect of vegan farming is permissible because it is permissible to ensure minimal human flourishing. But that seems mistaken. Rather, while (2) is true, it is not a moral principle, but a consequence of a correlation between (a) fundamental facts about what moral duties there actually are and (b) facts about what is actually needed for minimal human flourishing under normal conditions.

This leads to an interesting and I think somewhat underexplored question: Why are the moral facts and the facts about actual human needs so correlated as to make (2) true?

Theists have an elegant answer to this question: God had very strong moral reason to make humans in such a way that, at least normally, minimal flourishing of the community doesn’t require wrong action. Non-theists have other stories to tell. These stories, however, are likely to be piecemeal. For instance, one will give one evolutionary story about why we and our ecosystem evolved in such a way that eating persons wasn’t needed for our species’ survival, and another about why we evolved in such a way that morally non-degrading sex sufficed for reproduction. But a unified answer is to be preferred over piecemeal answers, especially when the unified answer is compatible with the piecemeal ones and capable of integrating them into a single story. We do, thus, get some evidence for theism here.

I tend to agree with Pruss here on a certain level. Thus I have argued myself that the fact that the world is good implies that its principle is good. However, his argument is more particular than that. He is claiming that principle (2) could have failed to be true empirically, and consequently that there was a need of some special effort to make sure that it did not fail to be true. He is presumably not rejecting the theory of evolution, but he is arguing that God needed to take special care to ensure that evolution did not follow certain paths where (2) would have ended up being false.

In contrast, I would argue that (2) could not possibly have failed to be true. This follows from an Aristotelian view of ethics and from the nature of moral obligation. Virtue simply means those habits that lead to human flourishing, and moral obligations are simply those things which are necessary for the human good. So it is evident that there was no need for any special measures to prevent immorality from being necessary for human flourishing. Whatever was necessary would have been moral.

Pruss gives examples: why isn’t eating persons necessary for survival, and why isn’t morally degrading sex necessary for reproduction? (In the comments he gives rape as an example of morally degrading sex.) As Pruss points out, a reasonable evolutionary account can be given for each thing of this kind. Generally speaking, prey populations must significantly outnumber predator populations for stability, and this implies that even if it is possible for some species to prey on itself to some extent, as in cannibalism, it is not likely to be necessary; most of the nourishment must come from elsewhere.

Similarly, given the nature of rationality, it would be highly unlikely for lack of consent to be necessary for a reproductive process between two individuals. One could imagine its necessity: perhaps reproduction only happens when hormones are present in the blood which are released only in circumstances of distress and unwillingness. But the fact that this might be possible in principle does not make it a likely thing to evolve; to the extent that a rational party is unwilling to reproduce, reproduction is unlikely to happen at all. So this kind of situation is likely to lead either to extinction, or to a new situation where lack of consent is no longer necessary.

Pruss’s response is that “a unified answer is to be preferred over piecemeal answers.” But this only works if it is in fact true that (2) would have been false if eating persons had been necessary for survival, or if lack of consent had been necessary for reproduction.

I would respond to Pruss in two ways. First, as I have already stated, (2) could not have been false, and would not have been false even in Pruss’s imaginary scenarios. Second, human life as it actually is has properties which directly suggest that no special effort has been taken to avoid such things. These two claims might seem inconsistent. I will explain their consistency when I come to the second point.

Regarding the first point, suppose eating persons were necessary for the survival of the human race. Let’s say that when someone reached the age of 15, it was necessary for him to eat an older person or die of a fatal disease. This would be part of the human growth process.

It is obvious, and Pruss concedes that it is true, that if this were the case, all humans would agree that it was morally acceptable for the 15-year-old children to eat the older adults. This would presumably have some concrete social arrangement, perhaps with the very oldest being eaten. They might not like the fact, but even the ones being eaten would presumably accept the necessity of the situation, and in most cases consent to it. Pruss simply claims that despite the fact that all humans would agree that the behavior was moral, it would be objectively wrong.

This seems to me to deserve the “incredulous stare” that positions like Andrea Dworkin’s and the antinatalists’ receive. What could even be meant by the supposed objective wrongness in that situation? And if there is such a thing, perhaps many things that we do in everyday life are objectively wrong as well, and we simply don’t know it, in the same way those people would not.

Again, consider the idea that non-consensual sex might have been necessary for reproduction. This situation seems even more unlikely than the previous one, for the reasons given above, but supposing it were the actual situation, again, virtually all humans (possibly with exceptions like Andrea Dworkin) would agree that reproduction was moral. Lack of consent would no more make reproduction immoral, in that situation, than the fact that children do not consent to much of the treatment they receive from their parents means that raising children is immoral.

This point is in fact a good transition to my second claim. Human life requires that children receive a good deal of treatment to which they do not consent, and with which they often strongly disagree. There is nothing great about this situation, but it is inevitable. And this kind of point illustrates my claim that no special effort has been taken to avoid such situations. If eating people had been necessary for survival, or non-consensual sex had been necessary for reproduction, we might very well have recognized that these things were unfortunate necessities, but we would not have concluded that they were immoral, and in fact they would not have been immoral, given those circumstances.

We could find other examples of “unfortunate” situations in human life as it is:

1) Breastfeeding tends to space births by preventing conception. But there is some evidence that occasionally it can cause an abortion, or at least contribute to causing one. Alan McNeilly says regarding this point:

The foregoing discussion has made it clear that suckling is the key to the suppression of fertility. The variable return of ovarian activity is related to the variable pattern of suckling input and how fast the baby feeds. It is known that conception rates in women who are still breastfeeding but have resumed menstrual cycles are lower than those in women who have resumed menstruation after stopping contraception. The reason for this has now become clear. When ovulation occurs during lactation, it is often associated with reduced or inadequate corpus luteum function, resulting in reduced progesterone secretion [23-25]. The implication is that conception in a number of cycles can occur, but inadequate luteal function prevents continuation of the pregnancy.

Some people would argue that this definitely cannot happen, using an argument somewhat analogous to Pruss’s own argument that God makes sure to avoid such unfortunate situations. Thus someone says on the Catholic Answers forum,

You have to be careful about the crazy things that are put out there. Where did you read this?

Surely you’re not suggesting that breastfeeding is a sin?

Breastfeeding does NOT hinder implantation. It really wouldn’t make sense for G-d to give us the ability to lactate for which to feed our children while potentially destroying fertilized eggs by preventing their implantation

Naturally, nothing is settled by this argument. But the commenter here is right about one thing: we already know that breastfeeding is not immoral. And that fact is not going to change, not even if we discover that it frequently causes abortions.

2) The headship of the man in a family is arguably necessary for human flourishing, or at least was in the past, but the resulting subjection of the woman seems somewhat unfortunate, even though (by my own argument) not wicked. Even the book of Genesis suggests that something is not quite right there, by making it a consequence of original sin.

3) Religion and philosophy are arguably necessary for human flourishing. But it is difficult to know the truth about these matters, and humans tend to hold positions regarding them for social reasons. And if we suppose that we personally possess some part of the truth about these matters, it follows that most of those in the past were substantially mistaken about them, given the extent of human disagreement in such matters. This is not merely a question of lacking the good of truth. Rather, the fact that people do not naturally care much about that truth seems to be an unfortunate moral situation, much like the imaginary situations invented by Pruss.

Finally, we can consider one more imaginary situation. Suppose that the real world turned out to be like the world of Horton Hears a Who! Suppose that every time you took a step, hundreds of tiny rational creatures were killed. No normal human would lie down and die after discovering this fact. Pruss, I think, would assert that it would be the right thing to do, but he would be in a tiny minority. Most people would change nothing, and I would agree with them. I would respond that 1) the situation would not change the moral object of any human action, which would mean that anything we are justified in doing now, we would remain justified in doing; and 2) the population comparison involved implies a vastly higher economic value to normal human beings, which would imply that we would remain justified in living normal human lives even after considering the secondary consequences of our behavior.

The arguments of this post imply that in principle morality could have been somewhat different, depending on the details of how human life evolved. But the arguments imply not only that it could have been different, but that it remains changeable in some ways, because the process of evolution does not come to an end, since it is a necessary result of imperfect copies. Naturally, this kind of change should be expected to take place mainly over very long periods of time, but this will not necessarily prevent it from happening.