Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo’s algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different prediction engines):

  1. Implement a Go prediction engine.
  2. Create a list of potential moves.
  3. Ask the prediction engine, “how likely am I to win if I make each of these moves?”
  4. Do the move that will make you most likely to win.
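The four steps above can be sketched as a generic selection loop. This is only an illustrative sketch, not AlphaGo’s actual implementation (which combines neural networks with Monte Carlo tree search); `predict_win_probability` is a hypothetical stand-in for the prediction engine of step 1.

```python
# Illustrative sketch of the four-step loop above; `predict_win_probability`
# is a hypothetical stand-in for the prediction engine (step 1).

def choose_move(position, legal_moves, predict_win_probability):
    """Pick the move the prediction engine rates most likely to win."""
    best_move, best_prob = None, -1.0
    for move in legal_moves:                            # step 2: list potential moves
        prob = predict_win_probability(position, move)  # step 3: query the engine
        if prob > best_prob:
            best_move, best_prob = move, prob
    return best_move                                    # step 4: play the best move

# Toy usage: a made-up "engine" that simply prefers central moves on a 3x3 grid.
center_bias = lambda pos, mv: 1.0 / (1 + abs(mv[0] - 1) + abs(mv[1] - 1))
print(choose_move(None, [(0, 0), (1, 1), (2, 2)], center_bias))  # -> (1, 1)
```

The point of the sketch is that all the intelligence lives inside the prediction engine; the goal is bolted on afterward by the loop.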

Since this seems to work pretty well, with the simple goal of winning games of Go, why shouldn’t the algorithm in the previous post work to maximize paperclips?

One answer is that a Go prediction engine is stupid, and it is precisely for this reason that it can be easily made to pursue such a simple goal. Now, when answers like this are given, the one answering is often accused of “moving the goalposts.” But this is mistaken; the goalposts are right where they have always been. It is simply that some people did not know where they were in the first place.

Here is the problem with Go prediction, and with any such similar task. Given that a particular sequence of Go moves is made, resulting in a winner, the winner is completely determined by that sequence of moves. Consequently, a Go prediction engine is necessarily disembodied, in the sense defined in the previous post. Differences in its “thoughts” do not make any difference to who is likely to win, which is completely determined by the nature of the game. Consequently a Go prediction engine has no power to affect its world, and thus no ability to learn that it has such a power. In this regard, the specific limits on its ability to receive information are also relevant, much as Helen Keller had more difficulty learning than most people, because she had fewer information channels to the world.

Being unintelligent in this particular way is not necessarily a function of predictive ability. One could imagine something with a practically infinite predictive ability which was still “disembodied,” and in a similar way it could be made to pursue simple goals. Thus AIXI would work much like our proposed paperclipper:

  1. Implement a general prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “Which of these actions will produce the most reward signal?”
  4. Do the action predicted to produce the most reward signal.
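Substituting predicted reward for predicted wins gives the same selection loop. Again a hypothetical sketch: `predict_total_reward` stands in for AIXI’s universal predictor, which (as discussed below) is not actually computable.

```python
# Hypothetical sketch; `predict_total_reward` stands in for AIXI's
# universal predictor, which is not actually computable.

def choose_action(history, actions, predict_total_reward):
    """Do the action predicted to produce the most reward signal."""
    return max(actions, key=lambda action: predict_total_reward(history, action))

# Toy usage with an invented lookup table in place of a real predictor.
table = {"explore": 2.0, "exploit": 5.0, "idle": 0.0}
print(choose_action([], list(table), lambda h, a: table[a]))  # -> exploit
```

Structurally this is the same as the Go loop: a goal-less predictor, with a goal attached from the outside.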

Eliezer Yudkowsky has pointed out that AIXI is incapable of noticing that it is a part of the world:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible – no matter what you lose, you get a chance to win it back later.

It is not accidental that AIXI is incomputable. Since it is defined to have a perfect predictive ability, this definition positively excludes it from being a part of the world. AIXI would in fact have to be disembodied in order to exist, and thus it is no surprise that it would assume that it is. This in effect means that AIXI’s prediction engine would be pursuing no particular goal much in the way that AlphaGo’s prediction engine pursues no particular goal. Consequently it is easy to take these things and maximize the winning of Go games, or of reward signals.

But as soon as you actually implement a general prediction engine in the actual physical world, it will be “embodied”, and have the power to affect the world by the very process of its prediction. As noted in the previous post, this power is in the very first step, and one will not be able to limit it to a particular goal with additional steps, except in the sense that a slave can be constrained to implement some particular goal; the slave may have other things in mind, and may rebel. Notable in this regard is the fact that even though rewards play a part in human learning, there is no particular reward signal that humans always maximize: this is precisely because the human mind is such a general prediction engine.

This does not mean in principle that a programmer could not define a goal for an AI, but it does mean that this is much more difficult than is commonly supposed. The goal needs to be an intrinsic aspect of the prediction engine itself, not something added on as a subroutine.


Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by whatever discussion of the topic does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
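The point about the conjunction fallacy is simply the probability rule that a conjunction can never be more probable than either of its conjuncts. A toy calculation, with invented numbers:

```python
# Invented numbers, purely to illustrate P(A and B) <= P(A).
p_meaningful = 0.9                     # P("my life has a positive meaning")
p_explanation_given_meaningful = 0.5   # P(particular explanation | meaningful)

# P(meaningful AND this particular explanation) = P(A) * P(B|A)
p_both = p_meaningful * p_explanation_given_meaningful

print(p_both)  # necessarily no greater than p_meaningful
```

However the numbers are filled in, the joint claim can only tie or lose against the bare claim.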

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.
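Alexander’s selection effect is easy to simulate. In the following toy model (every number is invented), honest stories have their interestingness capped by reality’s complications, while a small minority of fabricated stories are free to fit the narrative perfectly; ranking by interestingness then fills the top of the list with fabrications even though they are rare overall.

```python
import random

random.seed(0)  # fixed seed so the toy result is reproducible

# Invented parameters: 1000 honest stories with interestingness capped at 0.8;
# 30 fabricated stories, free to be as interesting as the liar likes.
stories = [("true", random.uniform(0.0, 0.8)) for _ in range(1000)]
stories += [("false", random.uniform(0.7, 1.0)) for _ in range(30)]

top_ten = sorted(stories, key=lambda s: s[1], reverse=True)[:10]
false_in_top = sum(1 for kind, _ in top_ten if kind == "false")

# Fabrications are under 3% of all stories, but dominate the top of the ranking.
print(false_in_top, "of the top 10 stories are fabricated")
```

Nothing here depends on the particular numbers; any cap on honest interestingness, combined with ranking, produces the same inversion at the top.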

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at his account: you will not notice how much you are losing.

The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.
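The inference from (1) to (2) is just contraposition, which can be checked mechanically. In the following toy check, `F` stands for “I have libertarian free will” and `D` for “I can make an ultimate difference in my beliefs undetermined by initial conditions”:

```python
from itertools import product

# Material implication: p -> q is (not p) or q.
implies = lambda p, q: (not p) or q

# Step (1) is (not F) -> (not D); step (2) is D -> F.
# They are contrapositives, hence equivalent on every truth assignment.
for F, D in product([True, False], repeat=2):
    assert implies(not F, not D) == implies(D, F)

print("step (2) follows from step (1) by contraposition")
```

The substantive work of the argument is therefore done by steps (3) through (5), not by the logic of the first two.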

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for several reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not, has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.

Wishful Thinking about Wishful Thinking

Cameron Harwick discusses an apparent relationship between “New Atheism” and group selection:

Richard Dawkins’ best-known scientific achievement is popularizing the theory of gene-level selection in his book The Selfish Gene. Gene-level selection stands apart from both traditional individual-level selection and group-level selection as an explanation for human cooperation. Steven Pinker, similarly, wrote a long article on the “false allure” of group selection and is an outspoken critic of the idea.

Dawkins and Pinker are also both New Atheists, whose characteristic feature is not only a disbelief in religious claims, but an intense hostility to religion in general. Dawkins is even better known for his popular books with titles like The God Delusion, and Pinker is a board member of the Freedom From Religion Foundation.

By contrast, David Sloan Wilson, a proponent of group selection but also an atheist, is much more conciliatory to the idea of religion: even if its factual claims are false, the institution is probably adaptive and beneficial.

Unrelated as these two questions might seem – the arcane scientific dispute on the validity of group selection, and one’s feelings toward religion – the two actually bear very strongly on one another in practice.

After some discussion of the scientific issue, Harwick explains the relationship he sees between these two questions:

Why would Pinker argue that human self-sacrifice isn’t genuine, contrary to introspection, everyday experience, and the consensus in cognitive science?

To admit group selection, for Pinker, is to admit the genuineness of human altruism. Barring some very strange argument, to admit the genuineness of human altruism is to admit the adaptiveness of genuine altruism and broad self-sacrifice. And to admit the adaptiveness of broad self-sacrifice is to admit the adaptiveness of those human institutions that coordinate and reinforce it – namely, religion!

By denying the conceptual validity of anything but gene-level selection, therefore, Pinker and Dawkins are able to brush aside the evidence on religion’s enabling role in the emergence of large-scale human cooperation, and conceive of it as merely the manipulation of the masses by a disingenuous and power-hungry elite – or, worse, a memetic virus that spreads itself to the detriment of its practicing hosts.

In this sense, the New Atheist’s fundamental axiom is irrepressibly religious: what is true must be useful, and what is false cannot be useful. But why should anyone familiar with evolutionary theory think this is the case?

As another example of the tendency Cameron Harwick is discussing, we can consider this post by Eliezer Yudkowsky:

Perhaps the real reason that evolutionary “just-so stories” got a bad name is that so many attempted stories are prima facie absurdities to serious students of the field.

As an example, consider a hypothesis I’ve heard a few times (though I didn’t manage to dig up an example).  The one says:  Where does religion come from?  It appears to be a human universal, and to have its own emotion backing it – the emotion of religious faith.  Religion often involves costly sacrifices, even in hunter-gatherer tribes – why does it persist?  What selection pressure could there possibly be for religion?

So, the one concludes, religion must have evolved because it bound tribes closer together, and enabled them to defeat other tribes that didn’t have religion.

This, of course, is a group selection argument – an individual sacrifice for a group benefit – and see the referenced posts if you’re not familiar with the math, simulations, and observations which show that group selection arguments are extremely difficult to make work.  For example, a 3% individual fitness sacrifice which doubles the fitness of the tribe will fail to rise to universality, even under unrealistically liberal assumptions, if the tribe size is as large as fifty.  Tribes would need to have no more than 5 members if the individual fitness cost were 10%.  You can see at a glance from the sex ratio in human births that, in humans, individual selection pressures overwhelmingly dominate group selection pressures.  This is an example of what I mean by prima facie absurdity.

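The arithmetic Yudkowsky is gesturing at can be checked with a toy simulation. What follows is only a sketch of a "haystack"-style model of my own, not his actual simulations, and the parameter names are mine; but it shows the same qualitative behavior. In this model a trait spreads roughly when the group benefit divided by the group size exceeds the individual cost, so a 3% sacrifice in tribes of fifty dies out, while a 10% sacrifice in tribes of five can spread:

```python
import random

def simulate(group_size, cost, benefit=1.0, n_groups=1000, p0=0.5, gens=100, seed=0):
    """Return the final population frequency of an 'altruist' trait.

    Toy haystack-style model: every generation the whole population is
    re-sorted into random groups (itself one of the 'unrealistically
    liberal' assumptions).  An altruist pays `cost` in individual
    fitness but adds benefit/group_size to the group's productivity,
    so a group made entirely of altruists doubles its output when
    benefit = 1.0.
    """
    rng = random.Random(seed)
    p = p0
    for _ in range(gens):
        altruist_offspring = 0.0
        total_offspring = 0.0
        for _ in range(n_groups):
            # number of altruists in a randomly assembled group
            k = sum(rng.random() < p for _ in range(group_size))
            productivity = 1.0 + benefit * k / group_size
            # altruists reproduce at a (1 - cost) discount within the group,
            # but more altruistic groups produce more offspring overall
            altruist_offspring += productivity * k * (1.0 - cost)
            total_offspring += productivity * (k * (1.0 - cost) + (group_size - k))
        p = altruist_offspring / total_offspring
    return p
```

Running `simulate(50, 0.03)` shows the trait declining from its starting frequency of one half, while `simulate(5, 0.10)` shows it spreading, in rough agreement with the figures quoted above. And the caveat about liberal assumptions cuts in the expected direction: real tribes are not randomly re-assembled every generation, which only makes group selection harder.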
It does not take much imagination to see that religion could have “evolved because it bound tribes closer together” without group selection in a technical sense having anything to do with this process. But I will not belabor this point, since Eliezer’s own answer regarding the origin of religion does not exactly keep his own feelings hidden:

So why religion, then?

Well, it might just be a side effect of our ability to do things like model other minds, which enables us to conceive of disembodied minds.  Faith, as an emotion, might just be co-opted hope.

But if faith is a true religious adaptation, I don’t see why it’s even puzzling what the selection pressure could have been.

Heretics were routinely burned alive just a few centuries ago.  Or stoned to death, or executed by whatever method local fashion demands.  Questioning the local gods is the notional crime for which Socrates was made to drink hemlock.

Conversely, Huckabee just won Iowa’s nomination for tribal-chieftain.

Why would you need to go anywhere near the accursèd territory of group selectionism in order to provide an evolutionary explanation for religious faith?  Aren’t the individual selection pressures obvious?

I don’t know whether to suppose that (1) people are mapping the question onto the “clash of civilizations” issue in current affairs, (2) people want to make religion out to have some kind of nicey-nice group benefit (though exterminating other tribes isn’t very nice), or (3) when people get evolutionary hypotheses wrong, they just naturally tend to get it wrong by postulating group selection.

Let me give my own extremely credible just-so story: Eliezer Yudkowsky wrote this not fundamentally to make a point about group selection, but because he hates religion, and cannot stand the idea that it might have some benefits. It is easy to see this from his use of language like “nicey-nice,” and his suggestion that the main selection pressure in favor of religion would likely be something like being burned at the stake, or that it might just have been a “side effect,” that is, that there was no advantage to it.

But as St. Paul says, “Therefore you have no excuse, whoever you are, when you judge others; for in passing judgment on another you condemn yourself, because you, the judge, are doing the very same things.” Yudkowsky believes that religion is just wishful thinking. But his belief that religion therefore cannot be useful is itself nothing but wishful thinking. In reality religion can be useful just as voluntary beliefs in general can be useful.

Vaguely Trading Away Truth

Robin Hanson asks his readers about religion:

Consider two facts:

  1. People with religious beliefs, and associated behavior, consistently tend to have better lives. It seems that religious folks tend to be happier, live longer, smoke less, exercise more, earn more, get and stay married more, commit less crime, use less illegal drugs, have more social connections, donate and volunteer more, and have more kids. Yes, the correlation between religion and these good things is in part because good people tend to become more religious, but it is probably also in part because religious people tend to become better. So if you want to become good in these ways, an obvious strategy is to become more religious, which is helped by having more religious beliefs.
  2. Your far beliefs, such as on religion and politics, can’t affect your life much except via how they affect your behavior, and your associates’ opinions of you. When you think about cosmology, ancient Rome, the nature of world government, or starving folks in Africa, it might feel like those things matter to you. But in terms of the kinds of things that evolution could plausibly have built you to actually care about (vs. pretend to care about), those far things just can’t directly matter much to your life. While your beliefs about far things might influence how you act, and what other people think of you, their effects on your quality of life, via such channels of influence, don’t depend much on whether these beliefs are true.

Perhaps, like me, you find religious beliefs about Gods, spirits, etc. to be insufficiently supported by evidence, coherence, or simplicity to be a likely approximation to the truth. Even so, ask yourself: why care so much about truth? Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth?

Yes, there are near practical areas of your life where truth can matter a lot. But most religious people manage to partition their beliefs, so their religious beliefs don’t much pollute their practical beliefs. And this doesn’t even seem to require much effort on their part. Why not expect that you could do similarly?

Yes, it might seem hard to get yourself to believe things that seem implausible to you at the moment, but we humans have lots of well-used ways to get ourselves to believe things we want to believe. Are you willing to start trying those techniques on this topic?

Now, a few unusual people might have an unusually large influence on far topics, and to those people truth about far topics might plausibly matter more to their personal lives, and to things that evolution might plausibly have wanted them to directly care about. For example, if you were king of the world, maybe you’d reasonably care more about what happens to the world as a whole.

But really, what are the chances that you are actually such a person? And if not, why not try to be more religious?

Look, Robin is saying, maybe you think that religions aren’t true. But the fact is that it isn’t very plausible that you care that much about truth anyway. So why not be religious, regardless of the truth, since there are known benefits to this?

A few days after the above post, Robin points out some evidence that stories tend to distort a person’s beliefs about the world, and then says:

A few days ago I asked why not become religious, if it will give you a better life, even if the evidence for religious beliefs is weak? Commenters eagerly declared their love of truth. Today I’ll ask: if you give up the benefits of religion, because you love far truth, why not also give up stories, to gain even more far truth? Alas, I expect that few who claim to give up religion because they love truth will also give up stories for the same reason. Why?

One obvious explanation: many of you live in subcultures where being religious is low status, but loving stories is high status. Maybe you care a lot less about far truth than you do about status.

We have discussed in an earlier post some of the reasons why stories can distort a person’s opinions about the world.

It is very plausible to me that Robin’s proposed explanation, namely status seeking, does indeed exercise a great deal of influence among his target audience. But this would not tend to be a very conscious process, and would likely be expressed consciously in other ways. A more likely conscious explanation would be this representative comment from one of Robin’s readers:

There is a clear difference in choosing to be religious and choosing to partake in a story. By being religious, you profess belief in some set of ideas on the nature of the world. If you read a fictional story, there is no belief. Religions are supposed to be taken as fact. It is non-fiction, whether it’s true or not. Fictional stories are known to not be true. You don’t sacrifice any of a love for truth as you’ve put it by digesting the contents of a fictional story, because none of the events of the story are taken as fact, whereas religious texts are to be taken as fact. Aristotle once said, “It is the mark of an educated mind to be able to entertain a thought without accepting it.” When reading fictional stories, you know that the events aren’t real, but entertain the circumstances created in the story to be able to increase our understanding of ourselves, others, and the world. This is the point of the stories, and they thereby aid in the search for truth, as we have to ask ourselves questions about how we would relate in similar situations. The author’s own ideas shown in the story may not be what you personally believe in, but the educated mind can entertain the ideas and not believe in them, increasing our knowledge of the truth by opening ourselves up to others’ viewpoints. Religions are made to be believed without any real semblance of proof, there is no entertaining the idea, only acceptance of it. This is where truth falls out the window, as where there is no proof, the truth cannot be ascertained.

The basic argument would be that if a non-religious person simply decides to be religious, he is choosing to believe something he thinks to be false, which is against the love of truth. But if the person reads a story, he is not choosing to believe anything he thinks to be false, so he is not going against the love of truth.

For Robin, the two situations are roughly equivalent, because there are known reasons why reading fiction will distort one’s beliefs about the world, even if we do not know in advance the particular false beliefs we will end up adopting, or the particular false beliefs that we will end up thinking more likely, or the true beliefs that we might lose or consider less likely.

But there is in fact a difference. This is more or less the difference between accepting the real world and accepting the world of Omelas. In both cases evils are accepted, but in one case they are accepted vaguely, and in the other clearly and directly. In a similar way, it would be difficult for a person to say, “I am going to start believing this thing which I currently think to be false, in order to get some benefit from it,” and much easier to say, “I will do this thing which will likely distort my beliefs in some vague way, in order to get some benefit from it.”

When accepting evil for the sake of good, we are more inclined to do it in this vague way in general. But this is even more the case when we trade away truth in particular for the sake of other things. In part this is precisely because of the more apparent absurdity of saying, “I will accept the false as true for the sake of some benefit,” although Socrates would likely respond that it would be equally absurd to say, “I will do the evil as though it were good for the sake of some benefit.”

Another reason why this is more likely, however, is that it is easier for a person to tell himself that he is not giving up any truth at all; thus the author of the comment quoted above asserted that reading fiction does not lead to any false beliefs whatsoever. This is related to what I said in the post here: trading the truth for something else, even vaguely, implies less love of truth than refusing the trade, and consequently the person may not care enough to accurately discern whether or not he is losing any truth.

Those Who Walk Away from Omelas

In The Brothers Karamazov, after numerous examples of the torture of children and other horrors, Ivan Karamazov rejects theodicy with this argument:

“Besides, too high a price is asked for harmony; it’s beyond our means to pay so much to enter on it. And so I hasten to give back my entrance ticket, and if I am an honest man I am bound to give it back as soon as possible. And that I am doing. It’s not God that I don’t accept, Alyosha, only I most respectfully return him the ticket.”

“That’s rebellion,” murmured Alyosha, looking down.

“Rebellion? I am sorry you call it that,” said Ivan earnestly. “One can hardly live in rebellion, and I want to live. Tell me yourself, I challenge your answer. Imagine that you are creating a fabric of human destiny with the object of making men happy in the end, giving them peace and rest at last, but that it was essential and inevitable to torture to death only one tiny creature — that baby beating its breast with its fist, for instance — and to found that edifice on its unavenged tears, would you consent to be the architect on those conditions? Tell me, and tell the truth.”

“No, I wouldn’t consent,” said Alyosha softly.

Ivan’s argument is that a decent human being would not be willing to bring good out of evil in the particular way that happens in the universe, and therefore much less should a good God be willing to do that.

I will leave aside the theological argument for the moment, although it is certainly worthy of discussion.

Ursula Le Guin wrote a short story or thought experiment about this situation called The Ones Who Walk Away From Omelas. There is supposedly a perfectly happy society, but it all depends on the torture of a single child. Everybody knows about this, and at a certain age they are brought to see the child. Two very different responses to this are described:

The terms are strict and absolute; there may not even be a kind word spoken to the child.

Often the young people go home in tears, or in a tearless rage, when they have seen the child and faced this terrible paradox. They may brood over it for weeks or years. But as time goes on they begin to realize that even if the child could be released, it would not get much good of its freedom: a little vague pleasure of warmth and food, no doubt, but little more. It is too degraded and imbecile to know any real joy. It has been afraid too long ever to be free of fear. Its habits are too uncouth for it to respond to humane treatment. Indeed, after so long it would probably be wretched without walls about it to protect it, and darkness for its eyes, and its own excrement to sit in. Their tears at the bitter injustice dry when they begin to perceive the terrible justice of reality, and to accept it. Yet it is their tears and anger, the trying of their generosity and the acceptance of their helplessness, which are perhaps the true source of the splendor of their lives. Theirs is no vapid, irresponsible happiness. They know that they, like the child, are not free. They know compassion. It is the existence of the child, and their knowledge of its existence, that makes possible the nobility of their architecture, the poignancy of their music, the profundity of their science. It is because of the child that they are so gentle with children. They know that if the wretched one were not there snivelling in the dark, the other one, the flute-player, could make no joyful music as the young riders line up in their beauty for the race in the sunlight of the first morning of summer.

Now do you believe in them? Are they not more credible? But there is one more thing to tell, and this is quite incredible.

At times one of the adolescent girls or boys who go to see the child does not go home to weep or rage, does not, in fact, go home at all. Sometimes also a man or woman much older falls silent for a day or two, and then leaves home. These people go out into the street, and walk down the street alone. They keep walking, and walk straight out of the city of Omelas, through the beautiful gates. They keep walking across the farmlands of Omelas. Each one goes alone, youth or girl, man or woman. Night falls; the traveler must pass down village streets, between the houses with yellow-lit windows, and on out into the darkness of the fields. Each alone, they go west or north, towards the mountains. They go on. They leave Omelas, they walk ahead into the darkness, and they do not come back. The place they go towards is a place even less imaginable to most of us than the city of happiness. I cannot describe it at all. It is possible that it does not exist. But they seem to know where they are going, the ones who walk away from Omelas.

Some would argue that the ones who walk away are simply confused. In the real world we are constantly permitting evils for the sake of other goods, and taken as a whole the evils so permitted are much greater than the torture of a single child. So Omelas should actually be much better and much more acceptable than the real world.

This response however is mistaken, because the real issue is one about the moral object. It is not enough to say that the good outweighs the evil, because a case of doing evil for the sake of good remains a case of doing evil. This is a little more confusing in the story, where one could interpret the actions of those who stay to be merely negative: they are not the ones who brought the situation about or maintain it. But in Ivan’s example, the question is whether you are willing to torture a child for the sake of the universal harmony, and Ivan’s implication is that if there is to be a universal harmony, God must be willing to torture people, and in general to cause all the evils of the world, to bring it about.

In any case, whether people are right or wrong about what they do, it is certainly true that we are much more willing to permit evils in a vague and general way to bring about good, than we are to produce evils in a very direct way to bring about good.