The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will. [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]
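
Steps (2) and (4) are purely formal: (2) is the contrapositive of (1), and (4) follows from (2) and (3) by hypothetical syllogism. For readers who want to check the logic mechanically, here is a minimal truth-table verification in Python; the propositional labels are mine, not Chappell’s.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# F: I have free will; C: I can choose what to believe;
# O: I ought to believe that I have free will.
for F, C, O in product([True, False], repeat=3):
    p1 = implies(not F, not C)  # premise (1)
    p2 = implies(C, F)          # step (2), the contrapositive of (1)
    p3 = implies(F, O)          # premise (3)
    p4 = implies(C, O)          # step (4)
    assert implies(p1, p2)           # (1) entails (2) in every valuation
    assert implies(p2 and p3, p4)    # (2) and (3) jointly entail (4)

print("Steps (2) and (4) are formally valid.")
```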

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to the normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will. [from 1]
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for two reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not, has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
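
The need to weigh consequences overall can be made concrete with a toy payoff table. This is only an illustrative sketch: the utility numbers are entirely made up, chosen so that heaven and hell dominate while a true belief adds a small good and a false belief a small evil, as in the discussion above.

```python
# Made-up utilities: heaven/hell dominate; the truth or falsehood of the
# belief adds or subtracts a small amount, as in the discussion above.
HEAVEN, HELL, TRUE_BELIEF, FALSE_BELIEF = 1000, -1000, 1, -1

payoffs = {
    ("believe LFW",    "LFW true"):  HELL + TRUE_BELIEF,
    ("believe LFW",    "LFW false"): HELL + FALSE_BELIEF,
    ("believe no LFW", "LFW true"):  HEAVEN + FALSE_BELIEF,
    ("believe no LFW", "LFW false"): HEAVEN + TRUE_BELIEF,
}

for belief in ("believe LFW", "believe no LFW"):
    outcomes = [payoffs[(belief, truth)] for truth in ("LFW true", "LFW false")]
    print(belief, "-> worst:", min(outcomes), "best:", max(outcomes))
# Believing in LFW loses under either hypothesis once hell outweighs
# the small benefit of possibly believing a truth.
```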

Alien Implant: Newcomb’s Smoking Lesion

In an alternate universe, on an alternate earth, all smokers, and only smokers, get brain cancer. Everyone enjoys smoking, but many resist the temptation to smoke, in order to avoid getting cancer. For a long time, however, there was no known cause of the link between smoking and cancer.

Twenty years ago, autopsies revealed tiny black boxes implanted in the brains of dead persons, connected to their brains by means of intricate wiring. The source and function of the boxes and of the wiring, however, remains unknown. There is a dial on the outside of the boxes, pointing to one of two positions.

Scientists now know that these black boxes are universal: every human being has one. And in those humans who smoke and get cancer, in every case, the dial turns out to be pointing to the first position. Likewise, in those humans who do not smoke or get cancer, in every case, the dial turns out to be pointing to the second position.

It turns out that when the dial points to the first position, the black box releases dangerous chemicals into the brain which cause brain cancer.

Scientists first formed the reasonable hypothesis that smoking causes the dial to be set to the first position. Ten years ago, however, this hypothesis was definitively disproved. It is now known with certainty that the box is present, and the dial pointing to its position, well before a person ever makes a decision about smoking. Attempts to read the state of the dial during a person’s lifetime, however, result most unfortunately in an explosion of the equipment involved, and the gruesome death of the person.

Some believe that the black box must be reading information from the brain, and predicting a person’s choice. “This is Newcomb’s Problem,” they say. These persons choose not to smoke, and they do not get cancer. Their dials turn out to be set to the second position.

Others believe that such a prediction ability is unlikely. The black box is writing information into the brain, they believe, and causing a person’s choice. “This is literally the Smoking Lesion,” they say.  Accepting Andy Egan’s conclusion that one should smoke in such cases, these persons choose to smoke, and they die of cancer. Their dials turn out to be set to the first position.

Still others, more perceptive, note that the argument about prediction or causality is utterly irrelevant for all practical purposes. “The ritual of cognition is irrelevant,” they say. “What matters is winning.” Like the first group, these choose not to smoke, and they do not get cancer. Their dials, naturally, turn out to be set to the second position.


Chastek on Determinism

On a number of occasions, James Chastek has referred to the impossibility of a detailed prediction of the future as an argument for libertarian free will. This is a misunderstanding. It is impossible to predict the future in detail for the reasons given in the linked post, and this has nothing to do with libertarian free will or even any kind of free will at all.

The most recent discussions of this issue at Chastek’s blog are found here and here. The latter post:

Hypothesis: A Laplacian demon, i.e. a being who can correctly predict all future actions, contradicts our actual experience of following instructions with some failure rate.

Set up: You are in a room with two buttons, A and B. This is the same set-up as Soon’s free-will experiment, but the instructions are different.

Instructions: You are told that you will have to push a button every 30 seconds, and that you will have fifty trials. The clock will start when a sheet of paper comes out of a slit in the wall that says A or B. Your instructions are to push the opposite of whatever letter comes out.

The Apparatus: the first set of fifty trials is with a random letter generator. The second set of trials is with letters generated by a Laplacian demon who knows the wave function of the universe and so knows in advance what button will be pushed and so prints out the letter.

The Results: In the first set of trials, which we can confirm with actual experience, the success rate is close to 100%, but, the world being what it is, there is a 2% mistake rate in the responses. In the second set of trials the success rate is necessarily 0%. In the first set of trials, subjects report feelings of boredom, mild indifference, continual daydreaming, etc. The feelings expressed in the second trial might be any or all of the following: some say they suddenly developed a pathological desire to subvert the commands of the experiment, others express feelings of being alienated from their bodies, trying to press one button and having their hand fly in the other direction, others insist that they did follow instructions and consider you completely crazy for suggesting otherwise, even though you can point to video evidence of them failing to follow the rules of the experiment, etc.

The Third Trial: Run the trial a third time, this time giving the randomly generated letter to the subject and giving the Laplacian letter to the experimenter. Observe all the trials where the two generate the same number, and iterate the experiment until one has fifty trials. Our actual experience tells us that the subject will have a 98% success rate, but our theoretical Laplacian demon tells us that the success rate should be necessarily 0%. Since asserting that the random-number generator and the demon will never have the same response would make the error-rate necessarily disappear and cannot explain our actual experience of failures, the theoretical postulation of a Laplacian demon contradicts our actual experience. Q.E.D.

The post is phrased as a proof that Laplacian demons cannot exist, but in fact Chastek intends it to establish the existence of libertarian free will, which is a quite separate thesis; no one would be surprised if Laplacian demons cannot exist in the real world, but many people would be surprised if people turn out to have libertarian free will.

I explain in the comments there the problem with this argument:

Here is what happens when you set up the experiment. You approach the Laplacian demon and ask him to write the letter that the person is going to choose for the second set of 50 trials.

The demon will respond, “That is impossible. I know the wave function of the universe, and I know that there is no possible set of As and Bs such that, if that is the set written, it will be the set chosen by the person. Of course, I know what will actually be written, and I know what the person will do. But I also know that those do not and cannot match.”

In other words, you are right that the experiment is impossible, but this is not reason to believe that Laplacian demons are impossible; it is a reason to believe that it is impossible for anything to write what the person is going to do.

E.g. if your argument works, it proves either that God does not exist, or that he does not know the future. Nor can one object that God’s knowledge is eternal rather than of the future, since it is enough if God can write down what is going to happen, as he is thought to have done e.g. in the text, “A virgin will conceive etc.”

If you answer, as you should, that God cannot write what the person will do, but he can know it, the same applies to the Laplacian demon.

As another reality check here, according to St. Thomas a dog is “determinate to one” such that in the same circumstances it will do the same thing. But we can easily train a dog in such a way that no one can possibly write down the levers it will choose, since it will be trained to choose the opposite ones.

And still another: a relatively simple robot, programmed in the same way. We don’t need a Laplacian demon, since we can predict ourselves in every circumstance what it will do. But we cannot write that down, since then we would predict the opposite of what we wrote. And it is absolutely irrelevant that the robot is an “instrument,” since the argument does not have any premise saying that human beings are not instruments.

As for the third set, if I understood it correctly you are indeed cherry picking — you are simply selecting the trials where the human made a mistake, and saying, “why did he consistently make a mistake in these cases?” There is no reason; you simply selected those cases.
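
The demon’s point is, at bottom, a fixed-point fact: no output can match the choice of an agent whose deterministic rule is to oppose that output. A minimal sketch, assuming only the “do the opposite” rule (the function name is mine):

```python
def person(shown):
    """Deterministic rule: always push the opposite of the letter shown."""
    return "B" if shown == "A" else "A"

# The demon can predict person(L) perfectly for any L, yet no letter can
# be written that matches the resulting choice: there is no fixed point.
assert all(person(letter) != letter for letter in "AB")
```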

Chastek responds to this comment in a fairly detailed way. Rather than responding directly to the comment there, I ask him to comment on several scenarios. The first scenario:

If I drop a ball on a table, and I ask you to predict where it is going to first hit the table, and say, “Please predict where it is going to first hit the table, and let me know your prediction by covering the spot with your hand and keeping it there until the trial is over,” is it clear to you that:

a) it will be impossible for you to predict where it is going to first hit in this way, since if you cover a spot it cannot hit there

and

b) this has nothing whatsoever to do with determinism or indeterminism of anything.

The second scenario:

Let’s make up a deterministic universe. It has no human beings, no rocks, nothing but numbers. The wave function of the universe is this: f(x)=x+1, where x is the initial condition and x+1 is the second condition.

We are personally Laplacian demons compared to this universe. We know what the second condition will be for any original condition.

Now give us the option of setting the original condition, and say:

Predict the second condition, and set that as the initial condition. This should lead to a result like (1,1) or (2,2), which contradicts our experience that the result is always higher than the original condition. So the hypothesis that we know the output given the input must be false.

The answer: No. It is not false that we know the output given the input. We know that these do not and cannot match, not because of anything indeterminate, but because the universe is based on the completely deterministic rule that f(x)=x+1, not f(x)=x.

Is it clear:

a) why a Laplacian demon cannot set the original condition to the resulting condition
b) this has nothing to do with anything being indeterminate
c) there is no absurdity in a Laplacian demon for a universe like this
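
For concreteness, here is the toy universe in a few lines of Python; nothing is assumed beyond the rule f(x) = x + 1 stated above.

```python
def wave_function(x):
    """The entire law of the toy universe: f(x) = x + 1."""
    return x + 1

# The demon's knowledge is complete: it predicts every outcome exactly.
assert all(wave_function(x) == x + 1 for x in range(100))

# But "set the initial condition equal to the outcome" asks for a fixed
# point, and f(x) = x + 1 has none; determinism is untouched.
assert all(wave_function(x) != x for x in range(-1000, 1000))
```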

The reason why I presented these questions instead of responding directly to his comments is that his comments are confused, and an understanding of these situations would clear up that confusion. For unclear reasons, Chastek failed to respond to these questions. Nonetheless, I will respond to his detailed comments in the light of the above explanations. Chastek begins:

Here are my responses:

That is impossible… I know what will actually be written, and I know what the person will do. But I also know that those do not and cannot match

But “what will actually be written” is, together with a snapshot of the rest of the universe, an initial condition and “what the person will do” is an outcome. Saying these “can never match” means the demon is saying “the laws of nature do not suffice to go from this initial condition to one of its outcomes” which is to deny Laplacian demons altogether.

The demon is not saying that the laws of nature do not suffice to go from an initial condition to an outcome. It is saying that “what will actually be written” is part of the initial conditions, and that it is an initial condition that is a determining factor that prevents itself from matching the outcome. In the case of the dropping ball above, covering the spot with your hand is an initial condition, and it absolutely prevents the outcome being that the ball first hits there. In the case of f(x), x is an initial condition, and it prevents the outcome from being x, since it will always be x+1. In the same way, in Chastek’s experiment, what is written is an initial condition which prevents the outcome from being that thing which was written.

If you answer, as you should, that God cannot write what the person will do, but he can know it, the same applies to the Laplacian demon.

When God announces what will happen he can be speaking about what he intends to do, while a LD cannot. I’m also very impressed by John of St. Thomas’s arguments that the world is not only notionally present to God but even physically present within him, which makes for a dimension of his speaking of the future that could never be said of an LD. This is in keeping with the Biblical idea that God not only looks at the world but responds and interacts with it. The character of prophecy is also very different from the thought experiment we’re trying to do with an LD: LD’s are all about what we can predict in advance, but Biblical prophecies do not seem to be overly concerned with what can be predicted in advance, as should be shown from the long history of failed attempts to turn the NT into a predictive tool.

If God says, “the outcome will be A,” and then consistently causes the person to choose A even when the person has hostile intentions, this will be contrary to our experience in the same way that the Laplacian demon would violate our experience if it always got the outcome right. You can respond, “ok, but that’s fine, because we’re admitting that God is a cause, but the Laplacian demon is not supposed to be affecting the outcome.” The problem with the response is that God is supposed to be the cause all of the time, not merely some of the time; so why should he not also say what is going to happen, since he is causing it anyway?

I agree that prophecy in the real world never tells us much detail about the future in fact, and this is verified in all biblical prophecies and in all historical cases such as the statements about the future made by the Fatima visionaries. I also say that even in principle God could not consistently predict in advance a person’s actions, and show him those predictions, without violating his experience of choice, but I say that this is for the reasons given here.

But the point of my objection was not about how prophecy works in the real world. The point was that Catholic doctrine seems to imply that God could, if he wanted, announce what the daily weather is going to be for the next year. It would not bother me personally if this turns out to be completely impossible; but is Chastek prepared to say the same? The real issues with the Laplacian demon are the same: knowing exactly what is going to happen, and to what degree it can announce what it knows.

we can easily train a dog in such a way that no one can possibly write down the levers it will choose, since it will be trained to choose the opposite ones.

Such an animal would follow instructions with some errors, and so would be a fine test subject for my experiment. This is exactly what my subject does in trial #1. I say the same for your robot example.

(ADDED LATER) I’m thankful for this point, and developed it for reasons given above on the thread.

This seems to indicate the source of the confusion, relative to my examples of covering the place where the ball hits, and the case of the function f(x) = x+1. There is no error rate in these situations: the ball never hits the spot you cover, and f(x) never equals x.

But this is really quite irrelevant. The reason the Laplacian demon says that the experiment is impossible has nothing to do with the error rate, but with the anti-correlation between what is written and the outcome. Consider: suppose in fact you never make a mistake. There is no error rate. Nonetheless, the demon still cannot say what you are going to do, because you always do the opposite of what it says. Likewise, even if the dog never fails to do what it was trained to do, it is impossible for the Laplacian demon to say what it is going to do, since it always does the opposite. The same is true for the robot. In other words, my examples show the reason why the experiment is impossible, without implying that a Laplacian demon is impossible.

We can easily reconstruct my examples to contain an error rate, and nonetheless prediction will be impossible for the same reasons, without implying that anything is indeterminate. For example:

Suppose that the world is such that every tenth time you try to cover a spot, your hand slips off and stops blocking it. I specify every tenth time to show that determinism has nothing to do with this: the setup is completely determinate. In this situation, you are able to indicate the spot where the ball will hit every tenth time, but no more often than that.

Likewise, suppose we have f(x) = x+1, with one exception such that f(5) = 5. If we then ask the Laplacian demon (namely ourselves) to provide five values of x such that the output equals the input, we will not be able to provide five such cases, but we will be able to provide one, namely x = 5. Since this universe (the function universe) is utterly deterministic, the fact that we cannot present five such cases does not indicate anything indeterminate. It just indicates a determinate fact about how the function universe works.
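
In code, continuing the earlier sketch of the function universe (again assuming nothing beyond the stated rule):

```python
def wave_function(x):
    """Deterministic rule with one built-in exception: f(5) = 5."""
    return 5 if x == 5 else x + 1

fixed_points = [x for x in range(-1000, 1000) if wave_function(x) == x]
print(fixed_points)  # [5]: input can match output once, but not five times
```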

As for the third set, if I understood it correctly you are indeed cherry picking — you are simply selecting the trials where the human made a mistake,

LD’s can’t be mistaken. If they foresee outcome O from initial conditions C, then no mistake can fail to make O come about. But this isn’t my main point, which is simply to repeat what I said to David: cherry picking requires disregarding evidence that goes against your conclusion, but the times when the random number generator and the LD disagree provide no evidence whether LD’s are consistent with our experience of following instructions with some errors.

I said “if I understood it correctly” because the situation was not clearly laid out. I understood the setup to be this: the Laplacian demon writes out fifty letters, A or B, being the letters it sees that I am going to write. It does not show me this series of letters. Instead, a random process outputs a series of letters, A or B, and each time I try to select the opposite letter.

Given this setup, what the Laplacian demon writes always matches what I select. And most of the time, both are the opposite of what was output by the random process. But occasionally I make a mistake, that is, I fail to select the opposite letter, and choose the same letter that the random process chose. In these cases, since the Laplacian demon still knew what was going to happen, the demon’s letter also matches the random process letter, and my letter.

Now, Chastek says, consider only the cases where the demon’s letter is the same as the random process letter. It will turn out that over those cases, I have a 100% failure rate: that is, in every such case I selected the same letter as the random process. According to him, we should consider this surprising, since we would not normally have a 100% failure rate. This is not cherry picking, he says, because “the times when the random number generator and the LD disagree provide no evidence whether LD’s are consistent with our experience of following instructions with some errors.”

The problem with this should be obvious. Let us consider demon #2: he looks at what the person writes, and then writes down the same thing. Is this demon possible? There will be some cases where demon #2 writes down the opposite of what the random process output: those will be the cases where the person did not make a mistake. But there will be other cases where the person makes a mistake. In those cases, what the person writes, and what demon #2 writes, will match the output of the random process. Consider only those cases. The person has a 100% failure rate in those cases. The cases where the random process and demon #2 disagree provide no evidence whether demon #2 is consistent with our experience, so this is not cherry picking. Now it is contrary to our experience to have a 100% failure rate. So demon #2 is impossible.

This result is of course absurd – demon #2 is obviously entirely possible, since otherwise making copies of things would be impossible. This is sufficient to establish that Chastek’s response is mistaken. He is indeed cherry picking: he simply selected the cases where the human made a mistake, and noted that there was a 100% failure rate in those cases.
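
The selection effect is easy to exhibit in a minimal simulation. The 2% slip rate is Chastek’s own figure; everything else (the trial count, the coin-flip letters) is made up for illustration.

```python
import random

random.seed(0)
trials, slip_rate = 10_000, 0.02  # 2% error rate, as in Chastek's setup

overall_failures = selected_failures = selected_total = 0

for _ in range(trials):
    random_letter = random.choice("AB")
    slipped = random.random() < slip_rate
    # The person tries to pick the opposite letter, occasionally slipping.
    person = random_letter if slipped else ("B" if random_letter == "A" else "A")
    demon2 = person  # demon #2 simply copies whatever the person writes
    overall_failures += slipped
    if demon2 == random_letter:  # the subset singled out by the argument
        selected_total += 1
        selected_failures += slipped

print(f"overall failure rate:   {overall_failures / trials:.1%}")           # ~2%
print(f"failure rate in subset: {selected_failures / selected_total:.1%}")   # 100%
```

The 100% failure rate in the selected subset is a pure artifact of how the subset was chosen; it says nothing against the possibility of demon #2, and by parity of reasoning nothing against the Laplacian demon either.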

In other words, we do not need a formal answer to Chastek’s objection to see that there is something very wrong with it; but the formal answer is that the cases where the demon disagrees with the random process do indeed provide some evidence. The question is whether the existence of the demon is consistent with “our experience of following instructions with some errors.” But we cannot have this experience without sometimes following the instructions correctly; being right is part of this experience, just like being wrong. And the cases where the demon disagrees with the random process are cases where we follow the instructions correctly, and such cases provide evidence that the demon is possible.

Chastek provides an additional comment about the case of the dog:

Just a note, one point I am thankful to EU for is the idea that a trained dog might be a good test subject too. If this is right, then the recursive loop might not be from intelligence as such but the intrinsic indeterminism of nature, which we find in one way through (what Aristotle called) matter being present in the initial conditions and the working of the laws and in another through intelligence. But space is opened for one with the allowing of the other, since on either account nature has to allow for teleology.

I was pointing to St. Thomas in my response with the hope that St. Thomas’s position would at least be seen as reasonable; and there is no question that St. Thomas believes that there is no indeterminism whatsoever in the behavior of a dog. If a dog is in the same situation, he believes, it will do exactly the same thing. In any case, Chastek does not address this, so I will not try at this time to establish the fact of St. Thomas’s position.

The main point is that, as we have already shown, the reason it is impossible to predict what the dog will do has nothing to do with indeterminism, since such prediction is impossible even if the dog is infallible, and remains impossible even if the dog has a deterministic error rate.

The comment, “But space is opened for one with the allowing of the other, since on either account nature has to allow for teleology,” may indicate why Chastek is so insistent in his error: in his opinion, if nature is deterministic, teleology is impossible. This is a mistake much like Robin Hanson’s mistake explained in the previous post. But again I will leave this for later consideration.

I will address one last comment:

I agree the physical determinist’s equation can’t be satisfied for all values, and that what makes it possible is the presence of a sort of recursion. But in the context of the experiment this means that the letter on a sheet of paper together with a snapshot of the rest of the universe can never be an initial condition, but I see no reason why this would be the case. Even if I granted their claim that there was some recursive contradiction, it does not arise merely because the letter is given in advance, since the LD could print out the letter in advance just fine if the initial conditions were, say, a test particle flying through empty space toward button A with enough force to push it.

It is true that the contradiction does not arise just because the Laplacian demon writes down the letter. There is no contradiction even in the human case, if the demon does not show it to the human. Nor does anything contrary to our experience happen in such a case. The case which is contrary to our experience is when the demon shows the letter to the person; and this is indeed impossible on account of a recursive contradiction, not because the demon is impossible.

Consider the case of the test particle flying towards button A: it is not a problem for the demon to write down the outcome precisely because what is written has no particular influence, in this case, on the outcome.

But if “writing the letter” means covering the button, as in our example of covering the spot where the ball will hit, then the demon will not be able to write the outcome in advance. And obviously this will not mean there is any indeterminism.

The contradiction comes about because covering the button prevents the button from being pushed. And the contradiction comes about in the human case in exactly the same way: writing a letter causes, via the human’s intention to follow the instructions, the opposite outcome. Again indeterminism has nothing to do with this: the same thing will happen if the human is infallible, or if the human has an error rate which has deterministic causes.

“This means that the letter on a sheet of paper together with a snapshot of the rest of the universe can never be an initial condition.” No, it means that in some of the cases, namely those where the human will be successful in following instructions, the letter with the rest of the universe cannot be an initial condition where the outcome is the same as what is written. While there should be no need to repeat the reasons for this at this point, the reason is that “what is written” is a cause of the opposite outcome, and whether that causality is deterministic or indeterministic has nothing to do with the impossibility. The letter can indeed be an initial condition: but it is an initial condition where the outcome is the opposite of the letter, and the demon knows all this.

Age of Em

This is Robin Hanson’s first book. Hanson gradually introduces his topic:

You, dear reader, are special. Most humans were born before 1700. And of those born after, you are probably richer and better educated than most. Thus you and most everyone you know are special, elite members of the industrial era.

Like most of your kind, you probably feel superior to your ancestors. Oh, you don’t blame them for learning what they were taught. But you’d shudder to hear of many of your distant farmer ancestors’ habits and attitudes on sanitation, sex, marriage, gender, religion, slavery, war, bosses, inequality, nature, conformity, and family obligations. And you’d also shudder to hear of many habits and attitudes of your even more ancient forager ancestors. Yes, you admit that lacking your wealth your ancestors couldn’t copy some of your habits. Even so, you tend to think that humanity has learned that your ways are better. That is, you believe in social and moral progress.

The problem is, the future will probably hold new kinds of people. Your descendants’ habits and attitudes are likely to differ from yours by as much as yours differ from your ancestors. If you understood just how different your ancestors were, you’d realize that you should expect your descendants to seem quite strange. Historical fiction misleads you, showing your ancestors as more modern than they were. Science fiction similarly misleads you about your descendants.

As an example of the kind of past difference that Robin is discussing, even in the fairly recent past, consider this account by William Ewald of a trial from the sixteenth century:

In 1522 some rats were placed on trial before the ecclesiastical court in Autun. They were charged with a felony: specifically, the crime of having eaten and wantonly destroyed some barley crops in the jurisdiction. A formal complaint against “some rats of the diocese” was presented to the bishop’s vicar, who thereupon cited the culprits to appear on a day certain, and who appointed a local jurist, Barthelemy Chassenée (whose name is sometimes spelled Chassanée, or Chasseneux, or Chasseneuz), to defend them. Chassenée, then forty-two, was known for his learning, but not yet famous; the trial of the rats of Autun was to establish his reputation, and launch a distinguished career in the law.

When his clients failed to appear in court, Chassenée resorted to procedural arguments. His first tactic was to invoke the notion of fair process, and specifically to challenge the original writ for having failed to give the rats due notice. The defendants, he pointed out, were dispersed over a large tract of countryside, and lived in many villages; a single summons was inadequate to notify them all. Moreover, the summons was addressed only to some of the rats of the diocese; but technically it should have been addressed to them all.

Chassenée was successful in his argument, and the court ordered a second summons to be read from the pulpit of every local parish church; this second summons now correctly addressed all the local rats, without exception.

But on the appointed day the rats again failed to appear. Chassenée now made a second argument. His clients, he reminded the court, were widely dispersed; they needed to make preparations for a great migration, and those preparations would take time. The court once again conceded the reasonableness of the argument, and granted a further delay in the proceedings. When the rats a third time failed to appear, Chassenée was ready with a third argument. The first two arguments had relied on the idea of procedural fairness; the third treated the rats as a class of persons who were entitled to equal treatment under the law. He addressed the court at length, and successfully demonstrated that, if a person is cited to appear at a place to which he cannot come in safety, he may lawfully refuse to obey the writ. And a journey to court would entail serious perils for his clients. They were notoriously unpopular in the region; and furthermore they were rightly afraid of their natural enemies, the cats. Moreover (he pointed out to the court) the cats could hardly be regarded as neutral in this dispute; for they belonged to the plaintiffs. He accordingly demanded that the plaintiffs be enjoined by the court, under the threat of severe penalties, to restrain their cats, and prevent them from frightening his clients. The court again found this argument compelling; but now the plaintiffs seem to have come to the end of their patience. They demurred to the motion; the court, unable to settle on the correct period within which the rats must appear, adjourned on the question sine die, and judgment for the rats was granted by default.

Most of us would assume at once that this is all nothing but an elaborate joke; but Ewald strongly argues that it was all quite serious. This would actually be worthy of its own post, but I will leave it aside for now. In any case it illustrates the existence of extremely different attitudes even a few centuries ago.

In any event, Robin continues:

New habits and attitudes result less than you think from moral progress, and more from people adapting to new situations. So many of your descendants’ strange habits and attitudes are likely to violate your concepts of moral progress; what they do may often seem wrong. Also, you likely won’t be able to easily categorize many future ways as either good or evil; they will instead just seem weird. After all, your world hardly fits the morality tales your distant ancestors told; to them you’d just seem weird. Complex realities frustrate simple summaries, and don’t fit simple morality tales.

Many people of a more conservative temperament, such as myself, might wish to swap out “moral progress” here with “moral regress,” but the point stands in any case. This is related to our discussions of the effects of technology and truth on culture, and of the idea of irreversible changes.

Robin finally gets to the point of his book:

This book presents a concrete and plausible yet troubling view of a future full of strange behaviors and attitudes. You may have seen concrete troubling future scenarios before in science fiction. But few of those scenarios are in fact plausible; their details usually make little sense to those with expert understanding. They were designed for entertainment, not realism.

Perhaps you were told that fictional scenarios are the best we can do. If so, I aim to show that you were told wrong. My method is simple. I will start with a particular very disruptive technology often foreseen in futurism and science fiction: brain emulations, in which brains are recorded, copied, and used to make artificial “robot” minds. I will then use standard theories from many physical, human, and social sciences to describe in detail what a world with that future technology would look like.

I may be wrong about some consequences of brain emulations, and I may misapply some science. Even so, the view I offer will still show just how troublingly strange the future can be.

I greatly enjoyed Robin’s book, but unfortunately I have to admit that relatively few people in general will. It is easy enough to see the reason for this from Robin’s introduction. Who would expect to be interested? Possibly those who enjoy the “futurism and science fiction” concerning brain emulations; but if Robin does what he set out to do, those persons will find themselves strangely uninterested. As he says, science fiction is “designed for entertainment, not realism,” while he is attempting to answer the question, “What would this actually be like?” This intention is very remote from the intention of the science fiction, and consequently it will likely appeal to different people.

Whether or not Robin gets the answer to this question right, he definitely succeeds in making his approach and appeal differ from those of science fiction.

One might illustrate this with almost any random passage from the book. Here are portions of his discussion of the climate of em cities:

As we will discuss in Chapter 18, Cities section, em cities are likely to be big, dense, highly cost-effective concentrations of computer and communication hardware. How might such cities interact with their surroundings?

Today, computer and communication hardware is known for being especially temperamental about its environment. Rooms and buildings designed to house such hardware tend to be climate-controlled to ensure stable and low values of temperature, humidity, vibration, dust, and electromagnetic field intensity. Such equipment housing protects it especially well from fire, flood, and security breaches.

The simple assumption is that, compared with our cities today, em cities will also be more climate-controlled to ensure stable and low values of temperature, humidity, vibrations, dust, and electromagnetic signals. These controls may in fact become city level utilities. Large sections of cities, and perhaps entire cities, may be covered, perhaps even domed, to control humidity, dust, and vibration, with city utilities working to absorb remaining pollutants. Emissions within cities may also be strictly controlled.

However, an em city may contain temperatures, pressures, vibrations, and chemical concentrations that are toxic to ordinary humans. If so, ordinary humans are excluded from most places in em cities for safety reasons. In addition, we will see in Chapter 18, Transport section, that many em city transport facilities are unlikely to be well matched to the needs of ordinary humans.

Cities today are the roughest known kind of terrain, in the sense that cities slow down the wind the most compared with other terrain types. Cities also tend to be hotter than neighboring areas. For example, Las Vegas is 7° Fahrenheit hotter in the summer than are surrounding areas. This hotter city effect makes ozone pollution worse and this effect is stronger for bigger cities, in the summer, at night, with fewer clouds, and with slower wind (Arnfield 2003).

This is a mild reason to expect em cities to be hotter than other areas, especially at night and in the summer. However, as em cities are packed full of computing hardware, we shall now see that em cities will actually be much hotter.

While the book considers a wide variety of topics, e.g. the social relationships among ems, which look quite different from the above passage, the general mode of treatment is the same. As Robin put it, he uses “standard theories” to describe the em world, much as he employs standard theories about cities, about temperature and climate, and about computing hardware in the above passage.

One might object that basically Robin is positing a particular technological change (brain emulations), but then assuming that everything else is the same, and working from there. And there is some validity to this objection. But in the end there is actually no better way to try to predict the future; despite David Hume’s opinion, generally the best way to estimate the future is to say, “Things will be pretty much the same.”

At the end of the book, Robin describes various criticisms. First are those who simply said they weren’t interested: “If we include those who declined to read my draft, the most common complaint is probably ‘who cares?’” And indeed, that is what I would expect, since as Robin remarked himself, people are interested in an entertaining account of the future, not an attempt at a detailed description of what is likely.

Others, he says, “doubt that one can ever estimate the social consequences of technologies decades in advance.” This is basically the objection I mentioned above.

He lists one objection that I am partly in agreement with:

Many doubt that brain emulations will be our next huge technology change, and aren’t interested in analyses of the consequences of any big change except the one they personally consider most likely or interesting. Many of these people expect traditional artificial intelligence, that is, hand-coded software, to achieve broad human level abilities before brain emulations appear. I think that past rates of progress in coding smart software suggest that at previous rates it will take two to four centuries to achieve broad human level abilities via this route. These critics often point to exciting recent developments, such as advances in “deep learning,” that they think make prior trends irrelevant.

I don’t think Robin is necessarily mistaken in regard to his expectations about “traditional artificial intelligence,” although he may be, and I don’t find myself uninterested by default in things that I don’t think the most likely. But I do think that traditional artificial intelligence is more likely than his scenario of brain emulations; more on this below.

There are two other likely objections that Robin does not include in this list, although he does touch on them elsewhere. First, people are likely to say that the creation of ems would be immoral, even if it is possible, and similarly that the kinds of habits and lives that he describes would themselves be immoral. On the one hand, this should not be a criticism at all, since Robin can respond that he is simply describing what he thinks is likely, not saying whether it should happen or not; on the other hand, it is in fact obvious that Robin does not have much disapproval, if any, of his scenario. The book ends in fact by calling attention to this objection:

The analysis in this book suggests that lives in the next great era may be as different from our lives as our lives are from farmers’ lives, or farmers’ lives are from foragers’ lives. Many readers of this book, living industrial era lives and sharing industrial era values, may be disturbed to see a forecast of em era descendants with choices and life styles that appear to reject many of the values that they hold dear. Such readers may be tempted to fight to prevent the em future, perhaps preferring a continuation of the industrial era. Such readers may be correct that rejecting the em future holds them true to their core values.

But I advise such readers to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. This book has been designed in part to assist you in such a soul-searching examination. If after reading this book, you still feel compelled to disown your em descendants, I cannot say you are wrong. My job, first and foremost, has been to help you see your descendants clearly, warts and all.

Our own discussions of the flexibility of human morality are relevant. The creatures Robin is describing are in many ways quite different from humans, and it is in fact very appropriate for their morality to differ from human morality.

A second likely objection is that Robin’s ems are simply impossible, on account of the nature of the human mind. I think that this objection is mistaken, but I will leave the details of this explanation for another time. Robin appears to agree with Sean Carroll about the nature of the mind, as can be seen for example in this post. Robin is mistaken about this, for the reasons suggested in my discussion of Carroll’s position. Part of the problem is that Robin does not seem to understand the alternative. Here is a passage from the linked post on Overcoming Bias:

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

“I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.”

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

There is a false dichotomy here, and it is the same one that C.S. Lewis falls into when he says, “Either we can know nothing or thought has reasons only, and no causes.” And in general it is like the error of the pre-Socratics, that if a thing has some principles which seem sufficient, it can have no other principles, failing to see that there are several kinds of cause, and each can be complete in its own way. And perhaps I am getting ahead of myself here, since I said this discussion would be for later, but the objection that Robin’s scenario is impossible is mistaken in exactly the same way, and for the same reason: people believe that if a “materialistic” explanation could be given of human behavior in the way that Robin describes, then people do not truly reason, make choices, and so on. But this is simply to adopt the other side of the false dichotomy, much like C.S. Lewis rejects the possibility of causes for our beliefs.

One final point. I mentioned above that I see Robin’s scenario as less plausible than traditional artificial intelligence. I agree with Tyler Cowen in this post. This present post is already long enough, so again I will leave a detailed explanation for another time, but I will remark that Robin and I have a bet on the question.

Wishful Thinking about Wishful Thinking

Cameron Harwick discusses an apparent relationship between “New Atheism” and group selection:

Richard Dawkins’ best-known scientific achievement is popularizing the theory of gene-level selection in his book The Selfish Gene. Gene-level selection stands apart from both traditional individual-level selection and group-level selection as an explanation for human cooperation. Steven Pinker, similarly, wrote a long article on the “false allure” of group selection and is an outspoken critic of the idea.

Dawkins and Pinker are also both New Atheists, whose characteristic feature is not only a disbelief in religious claims, but an intense hostility to religion in general. Dawkins is even better known for his popular books with titles like The God Delusion, and Pinker is a board member of the Freedom From Religion Foundation.

By contrast, David Sloan Wilson, a proponent of group selection but also an atheist, is much more conciliatory to the idea of religion: even if its factual claims are false, the institution is probably adaptive and beneficial.

Unrelated as these two questions might seem – the arcane scientific dispute on the validity of group selection, and one’s feelings toward religion – the two actually bear very strongly on one another in practice.

After some discussion of the scientific issue, Harwick explains the relationship he sees between these two questions:

Why would Pinker argue that human self-sacrifice isn’t genuine, contrary to introspection, everyday experience, and the consensus in cognitive science?

To admit group selection, for Pinker, is to admit the genuineness of human altruism. Barring some very strange argument, to admit the genuineness of human altruism is to admit the adaptiveness of genuine altruism and broad self-sacrifice. And to admit the adaptiveness of broad self-sacrifice is to admit the adaptiveness of those human institutions that coordinate and reinforce it – namely, religion!

By denying the conceptual validity of anything but gene-level selection, therefore, Pinker and Dawkins are able to brush aside the evidence on religion’s enabling role in the emergence of large-scale human cooperation, and conceive of it as merely the manipulation of the masses by a disingenuous and power-hungry elite – or, worse, a memetic virus that spreads itself to the detriment of its practicing hosts.

In this sense, the New Atheist’s fundamental axiom is irrepressibly religious: what is true must be useful, and what is false cannot be useful. But why should anyone familiar with evolutionary theory think this is the case?

As another example of the tendency Cameron Harwick is discussing, we can consider this post by Eliezer Yudkowsky:

Perhaps the real reason that evolutionary “just-so stories” got a bad name is that so many attempted stories are prima facie absurdities to serious students of the field.

As an example, consider a hypothesis I’ve heard a few times (though I didn’t manage to dig up an example).  The one says:  Where does religion come from?  It appears to be a human universal, and to have its own emotion backing it – the emotion of religious faith.  Religion often involves costly sacrifices, even in hunter-gatherer tribes – why does it persist?  What selection pressure could there possibly be for religion?

So, the one concludes, religion must have evolved because it bound tribes closer together, and enabled them to defeat other tribes that didn’t have religion.

This, of course, is a group selection argument – an individual sacrifice for a group benefit – and see the referenced posts if you’re not familiar with the math, simulations, and observations which show that group selection arguments are extremely difficult to make work.  For example, a 3% individual fitness sacrifice which doubles the fitness of the tribe will fail to rise to universality, even under unrealistically liberal assumptions, if the tribe size is as large as fifty.  Tribes would need to have no more than 5 members if the individual fitness cost were 10%.  You can see at a glance from the sex ratio in human births that, in humans, individual selection pressures overwhelmingly dominate group selection pressures.  This is an example of what I mean by prima facie absurdity.
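
As a rough sanity check on the quoted numbers (this is my own back-of-envelope heuristic, not the math or simulations Yudkowsky references): for randomly formed groups of size n, relatedness between group members is on the order of 1/n, so by Hamilton’s rule a group benefit b (with b = 1 meaning the tribe’s fitness is doubled) outweighs an individual cost c only when b/n > c.

```python
def altruism_can_spread(b, c, n):
    """Hamilton's rule with relatedness r ~ 1/n for randomly formed groups."""
    return b / n > c

print(altruism_can_spread(b=1.0, c=0.03, n=50))  # False: fails at tribe size 50
print(altruism_can_spread(b=1.0, c=0.10, n=5))   # True only for tiny tribes
print(altruism_can_spread(b=1.0, c=0.10, n=10))  # False already by n = 10
```

On this crude criterion a 3% cost with a doubling benefit indeed fails at tribe size 50, and a 10% cost requires single-digit tribes, which is in the same ballpark as the figures quoted.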

It does not take much imagination to see that religion could have “evolved because it bound tribes closer together” without group selection in a technical sense having anything to do with this process. But I will not belabor this point, since Eliezer’s own answer regarding the origin of religion does not exactly keep his own feelings hidden:

So why religion, then?

Well, it might just be a side effect of our ability to do things like model other minds, which enables us to conceive of disembodied minds.  Faith, as an emotion, might just be co-opted hope.

But if faith is a true religious adaptation, I don’t see why it’s even puzzling what the selection pressure could have been.

Heretics were routinely burned alive just a few centuries ago.  Or stoned to death, or executed by whatever method local fashion demands.  Questioning the local gods is the notional crime for which Socrates was made to drink hemlock.

Conversely, Huckabee just won Iowa’s nomination for tribal-chieftain.

Why would you need to go anywhere near the accursèd territory of group selectionism in order to provide an evolutionary explanation for religious faith?  Aren’t the individual selection pressures obvious?

I don’t know whether to suppose that (1) people are mapping the question onto the “clash of civilizations” issue in current affairs, (2) people want to make religion out to have some kind of nicey-nice group benefit (though exterminating other tribes isn’t very nice), or (3) when people get evolutionary hypotheses wrong, they just naturally tend to get it wrong by postulating group selection.

Let me give my own extremely credible just-so story: Eliezer Yudkowsky wrote this not fundamentally to make a point about group selection, but because he hates religion, and cannot stand the idea that it might have some benefits. It is easy to see this from his use of language like “nicey-nice,” and his suggestion that the main selection pressure in favor of religion would be likely to be something like being burned at the stake, or that it might just have been a “side effect,” that is, that there was no advantage to it.

But as St. Paul says, “Therefore you have no excuse, whoever you are, when you judge others; for in passing judgment on another you condemn yourself, because you, the judge, are doing the very same things.” Yudkowsky believes that religion is just wishful thinking. But his belief that religion therefore cannot be useful is itself nothing but wishful thinking. In reality religion can be useful just as voluntary beliefs in general can be useful.

Semi-Parmenidean Heresy

In his book The Big Picture, Sean Carroll describes the view which he calls “poetic naturalism”:

As knowledge generally, and science in particular, have progressed over the centuries, our corresponding ontologies have evolved from quite rich to relatively sparse. To the ancients, it was reasonable to believe that there were all kinds of fundamentally different things in the world; in modern thought, we try to do more with less.

We would now say that Theseus’s ship is made of atoms, all of which are made of protons, neutrons, and electrons – exactly the same kinds of particles that make up every other ship, or for that matter make up you and me. There isn’t some primordial “shipness” of which Theseus’s is one particular example; there are simply arrangements of atoms, gradually changing over time.

That doesn’t mean we can’t talk about ships just because we understand that they are collections of atoms. It would be horrendously inconvenient if, anytime someone asked us a question about something happening in the world, we limited our allowable responses to a listing of a huge set of atoms and how they were arranged. If you listed about one atom per second, it would take more than a trillion times the current age of the universe to describe a ship like Theseus’s. Not really practical.

It just means that the notion of a ship is a derived category in our ontology, not a fundamental one. It is a useful way of talking about certain subsets of the basic stuff of the universe. We invent the concept of a ship because it is useful to us, not because it’s already there at the deepest level of reality. Is it the same ship after we’ve gradually replaced every plank? I don’t know. It’s up to us to decide. The very notion of “ship” is something we created for our own convenience.

That’s okay. The deepest level of reality is very important; but all the different ways we have of talking about that level are important too.
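Carroll’s one-atom-per-second figure is easy to verify. Here is a quick back-of-the-envelope check, in which the ship’s size and composition are illustrative assumptions (a trireme-scale wooden ship of about fifty tonnes, treated as pure cellulose):

```python
AVOGADRO = 6.022e23             # atoms per mole
SHIP_MASS_G = 5e7               # ~50 tonnes, a rough trireme displacement (assumption)
G_PER_MOL_OF_ATOMS = 162 / 21   # cellulose (C6H10O5): 162 g/mol spread over 21 atoms
AGE_OF_UNIVERSE_S = 13.8e9 * 3.15e7  # ~13.8 billion years, in seconds

atoms = SHIP_MASS_G / G_PER_MOL_OF_ATOMS * AVOGADRO
print(f"{atoms:.1e} atoms = {atoms / AGE_OF_UNIVERSE_S:.1e} ages of the universe")
# -> about 4e30 atoms, i.e. roughly ten trillion universe-ages at one atom per second
```

So “more than a trillion times the current age of the universe” holds comfortably even for a modest ancient ship.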

There is something essentially pre-Socratic about this thinking. When Carroll talks about “fundamentally different things,” he means things that differ according to their basic elements. But at the same time the implication is that only things that differ in this way are “fundamentally” different, in the sense of being truly or really different. This, however, is a quite different sense of “fundamental.”

I suggested in the linked post that even Thales might not really have believed that material causes alone sufficiently explained reality. Nonetheless, there was a focus on the material cause as being the truest explanation. We see the same focus here in Sean Carroll. When he says, “There isn’t some primordial shipness,” he is thinking of shipness as something that would have to be a material cause, if it existed.

Carroll proceeds to contrast his position with eliminativism:

One benefit of a rich ontology is that it’s easy to say what is “real” – every category describes something real. In a sparse ontology, that’s not so clear. Should we count only the underlying stuff of the world as real, and all the different ways we have of dividing it up and talking about it as merely illusions? That’s the most hard-core attitude we could take to reality, sometimes called eliminativism, since its adherents like nothing better than to go around eliminating this or that concept from our list of what is real. For an eliminativist, the question “Which Captain Kirk is the real one?” gets answered by, “Who cares? People are illusions. They’re just fictitious stories we tell about the one true world.”

I’m going to argue for a different view: our fundamental ontology, the best way we have of talking about the world at the deepest level, is extremely sparse. But many concepts that are part of non-fundamental ways we have of talking about the world – useful ideas describing higher-level, macroscopic reality – deserve to be called “real.”

The key word there is “useful.” There are certainly non-useful ways of talking about the world. In scientific contexts, we refer to such non-useful ways as “wrong” or “false.” A way of talking isn’t just a list of concepts; it will generally include a set of rules for using them, and relationships among them. Every scientific theory is a way of talking about the world, according to which we can say things like “There are things called planets, and something called the sun, all of which move through something called space, and planets do something called orbiting the sun, and those orbits describe a particular shape in space called an ellipse.” That’s basically Johannes Kepler’s theory of planetary motion, developed after Copernicus argued for the sun being at the center of the solar system but before Isaac Newton explained it all in terms of the force of gravity. Today, we would say that Kepler’s theory is fairly useful in certain circumstances, but it’s not as useful as Newton’s, which in turn isn’t as broadly useful as Einstein’s general theory of relativity.

A poetic naturalist will agree that both Captain Kirk and the Ship of Theseus are simply ways of talking about certain collections of atoms stretching through space and time. The difference is that an eliminativist will say “and therefore they are just illusions,” while the poetic naturalist says “but they are no less real for all of that.”

There are some good things about what Carroll is doing here. He is right of course to insist that the things of common experience are “real.” He is also right to see some relationship between saying that something is real and saying that talking about it is useful, but this is certainly worth additional consideration, and he does not really do it justice.

The problematic part is that, on account of his pre-Socratic tendencies, he is falling somewhat into the error of Parmenides. The error of Parmenides was to suppose that being can be, and can be thought and said, in only one way. Carroll, on account of confusing the various meanings of “fundamental,” supposes that being can be in only one way, namely as something elemental, but that it can be thought and said in many ways.

The problem with this, apart from the falsity of asserting that being can be in only one way, is that no metaphysical account is given whereby it would be reasonable to say that being can be thought and said in many ways, given that it can be in only one way. Carroll is trying to point in that direction by saying that our common speech is useful, so it must be about real things; but the eliminativist would respond, “Useful to whom? The things that you are saying this is useful for are illusions and do not exist. So even your supposed usefulness does not exist.” And Carroll will have no valid response, because he has already admitted to agreeing with the eliminativist on a metaphysical level.

The correct answer to this is the one given by Aristotle: material causes do not sufficiently explain reality; other causes are necessary as well. But this means that the eliminativist is mistaken on a metaphysical level, not merely in his way of speaking.

Parmenides the Eliminativist

While the name “eliminativism” is used particularly with respect to the denial of the reality of consciousness or of various mental states, we could define it more generally as the tendency to explain something away rather than explaining it. The motive for this would be that someone believes that reality does not have the principles needed to explain the thing, so it is necessary for him to explain it away instead. We noted earlier that Daniel Dennett denies the existence of consciousness in just this way: since every being is objective, in his view, reality has no principle which could explain anything subjective, and therefore it is necessary for him to explain subjectivity away.

If we take eliminativism in this general way, it will turn out that Parmenides is the ultimate eliminativist. According to Parmenides, not only is there nothing but being, but nothing can be distinct from being in any way, even in concept. Thus anything which appears to be conceptually distinct from being, including ourselves and all the objects of our common experience, is nothing but an illusion deluding itself. And ultimately nothing at all, since even illusions cannot be something other than being.

Parmenides comes to this conclusion in the same general way as Dennett, namely because it seems to him that reality cannot have any principle which could explain things as they are. It is evident that there cannot be anything besides being; thus if something seems distinct from being in any way, there is no principle capable of explaining it.

Descartes argued that he could know that he exists, since he thinks. On the contrary, Parmenides responds: you think, but thinking means something different from being; therefore you are not.