Statistical Laws of Choice

I noted in an earlier post the necessity of statistical laws of nature. This will necessarily apply to human actions as a particular case, as I implied there in mentioning the amount of food humans eat in a year.

Someone might object. It was said in the earlier post that this will happen unless there is a deliberate attempt to evade this result. But since we are speaking of human beings, there might well be such an attempt. So for example if we ask someone to choose to raise their right hand or their left hand, this might converge to an average, such as 50% each, or perhaps the right hand 60% of the time, or something of this kind. But presumably someone who starts out with the deliberate intention of avoiding such an average will be able to do so.

Unfortunately, such an attempt may succeed in the short run, but will necessarily fail in the long run, because although it is possible in principle, it would require an infinite knowing power, which humans do not have. As I pointed out in the earlier discussion, attempting to prevent convergence requires longer and longer strings on one side or the other. But if you need to raise your right hand a few trillion times before switching again to your left, you will surely lose track of your situation. Nor can you remedy this by writing things down, or by other technical aids: you may succeed in doing things trillions of times with this method, but if you do it forever, the numbers will also become too large to write down. Naturally, at this point we are only making a theoretical point, but it is nonetheless an important one, as we shall see later.
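The point about "longer and longer strings" can be made concrete with a small sketch (my own illustration, not from the earlier post). Suppose someone wants the running proportion of right-hand choices to keep swinging between 1/3 and 2/3 forever, never settling. Each swing then requires a run on one side long enough to move an ever-larger total, so the run lengths grow geometrically:

```python
# Sketch: to keep the running frequency of "right" oscillating between
# 1/3 and 2/3 forever, each run must outweigh the whole history so far,
# so the required run lengths roughly double each cycle.

def run_lengths_to_oscillate(cycles):
    """Return the consecutive run lengths needed so the proportion of
    'right' choices swings up to >= 2/3, then down to <= 1/3, repeatedly."""
    rights, total = 0, 0
    lengths = []
    for i in range(cycles):
        if i % 2 == 0:
            # choose right k times until (rights + k) / (total + k) >= 2/3
            k = max(1, 2 * total - 3 * rights)
            rights += k
        else:
            # choose left k times until rights / (total + k) <= 1/3
            k = max(1, 3 * rights - total)
        total += k
        lengths.append(k)
    return lengths

print(run_lengths_to_oscillate(12))
# run lengths roughly double: 1, 2, 3, 6, 12, 24, 48, ...
```

After a few dozen swings the required runs exceed anything a person could count, which is the point: evading convergence is possible in principle but demands an unbounded bookkeeping capacity.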

In any case, in practice people do not tend even to make such attempts, and consequently it is far easier to predict their actions in a roughly statistical manner. Thus for example it would not be hard to discover the frequency with which an individual chooses chocolate ice cream over vanilla.

Form and Reality

In a very interesting post Alexander Pruss discusses realism and skeptical scenarios:

The ordinary sentence “There are four chairs in my office” is true (in its ordinary context). Furthermore, its being true tells us very little about fundamental ontology. Fundamental physical reality could be made out of a single field, a handful of fields, particles in three-dimensional space, particles in ten-dimensional space, a single vector in a Hilbert space, etc., and yet the sentence could be true.

An interesting consequence: Even if in fact physical reality is made out of particles in three-dimensional space, we should not analyze the sentence to mean that there are four disjoint pluralities of particles each arranged chairwise in my office. For if that were what the sentence meant, it would tell us about which of the fundamental physical ontologies is correct. Rather, the sentence is true because of a certain arrangement of particles (or fields or whatever).

If there is such a broad range of fundamental ontologies that “There are four chairs in my office” is compatible with, it seems that the sentence should also be compatible with various sceptical scenarios, such as that I am a brain in a vat being fed data from a computer simulation. In that case, the chair sentence would be true due to facts about the computer simulation, in much the way that “There are four chairs in this Minecraft house” is true. It would be very difficult to be open to a wide variety of fundamental physics stories about the chair sentence without being open to the sentence being true in virtue of facts about a computer simulation.

But now suppose that the same kind of thing is true for other sentences about physical things like tables, dogs, trees, human bodies, etc.: each of these sentences can be made true by a wide array of physical ontologies. Then it seems that nothing we say about physical things rules out sceptical scenarios: yes, I know I have two hands, but my having two hands could be grounded by facts about a computer simulation. At this point the meaningfulness of the sceptical question whether I know I am not a brain in a vat is breaking down. And with it, realism is breaking down.

I am not completely sure what Pruss means by “realism is breaking down,” but he is looking at something important here. One question that needs to be addressed, however, is what counts as a skeptical scenario in the first place. In the rest of the post, Pruss makes an interesting suggestion about this:

In order for the sceptical question to make sense, we need the possibility of saying things that cannot simply be made true by a very wide variety of physical theories, since such things will also be made true by computer simulations. This gives us an interesting anti-reductionist argument. If the statement “I have two hands” is to be understood reductively (and I include non-Aristotelian functionalist views as reductive), then it could still be literally true in the brain-in-a-vat scenario. But if anti-reductionism about hands is true, then the statement wouldn’t be true in the brain-in-a-vat scenario. And so I can deny that I am in that scenario simply by saying “I have two hands.”

But maybe I am moving too fast here. Maybe “I have two hands” could be literally true in a brain-in-a-vat scenario. Suppose that the anti-reductionism consists of there being Aristotelian forms of hands (presumably accidental forms). But if, for all we know, the form of a hand can inform a bunch of particles, a fact about a vector or the region of a field, then the form of a hand can also inform an aspect of a computer simulation. And so, for all we know, I can literally and non-reductively have hands even if I am a brain in a vat. I am not sure, however, that I need to worry about this. What is important is form, not the precise material substrate. If physical reality is the memory of a giant computer but it isn’t a mere simulation but is in fact informed by a multiplicity of substantial and accidental forms corresponding to people, trees, hands, hearts, etc., and these forms are real entities, then the scenario does not seem to me to be a sceptical scenario.

A skeptical scenario, according to Pruss, is a situation where the things we normally talk about do not have forms. If they do have forms, we are not in a skeptical scenario at all, even if in some sense we are in a computer simulation or even if someone is a brain in a vat. On the face of it this seems a very odd claim: “form” seems to be a technical philosophical explanation, while asking if we are in a skeptical scenario seems to be asking if our everyday common sense understanding of things is mistaken.

Nonetheless, there is a lot of truth in his explanation. First let us consider what is meant by a skeptical scenario in the first place. In terms of his example, it is supposed to go something like this: “Is it possible that you are a brain in a vat without realizing it? If so, then almost everything you believe is false, since you do not have hands, the people you speak to are not real, and so on.”

In the post Pruss is pointing out a problem with the skeptical question. The skeptical question is like a skeptic in the remote past asking, “Is it possible that the earth is spinning without us realizing it? If so, then our everyday opinion that the sun rises every morning is false, since the sun does not move.”

The response to the second skeptic is evident: our everyday opinion that the sun rises every morning is not false, not even if the earth turns out to be spinning, because “the sun rises every morning” is to be understood in whatever way is needed in order for it to be true. It refers to what happens every morning, whatever that actually happens to be.

Pruss is pointing out that we can answer the first question in the same way: our everyday opinion that we have hands is not false, not even if we are in a computer simulation or in a vat, because “I have two hands” is to be understood in whatever way is needed in order for it to be true. It refers to these two things in front of me right now, whatever they actually are.

Let’s suppose the skeptic tries to come up with a response. He might say, “Look, computer programs do not have hands, and brains do not have hands. So if you are a computer program or a brain in a vat, then you just do not have hands, period. So those scenarios do indeed mean that your common understanding would be false.”

It is certainly true that according to our common understanding, brains in vats do not have hands. So there is a tension here: the argument that it would be true to say we have hands even in that situation seems like a good argument, but so does the argument that it would be false that we have hands.

The answer to the difficulty is that we need to consider the meaning of “I am a brain in a vat.” Just as the word “hands” should refer to these two things in front of me, whatever they are, so the word “brain” refers to things inside of people’s heads, whatever they are, and the word “vat” refers to other things we at least occasionally experience in real life, or something very like them. But this means that just as “I have two hands” is to be understood in whatever way is needed to make it true, so also “I am not a brain in a vat” is to be understood in whatever way is needed to make it true.

This means that the correct answer to the original question was simply, “No, it isn’t possible that I will turn out to be a brain in a vat, regardless of any later discoveries, and it isn’t possible that the sun will turn out not to rise, regardless of discoveries about the motion of the sun and of the earth.”

The skeptic will want to insist. Surely events like those of The Matrix are at least conceivable. And if some such situation turned out to be true, then wasn’t it true that you were in a skeptical scenario and that your beliefs about hands and brains and vats were all false, and especially would it not be the case that your belief that you weren’t in a situation like that was false?

The correct answer, again, is that your original beliefs were not false. But in view of your new knowledge of the world, you might well want to adopt a new mode of speaking, and say things that would sound opposed to your original beliefs. They would not be opposed, however, but would simply be speaking about things you did not originally speak about.

Note however that “your belief that you weren’t in a situation like that” could now be taken in two ways. It could mean my belief that I am not a brain in a vat, and this belief will never turn out to have been false. Or it could mean a belief that there is not some larger view of reality where “he was a brain in a vat” would be a reasonable description, in the way that someone coming out of the Matrix would acquire a larger view. In reality I have the latter belief as well, as I consider it improbable that any intelligent beings would behave in such a way as to make that scenario probable. But I don’t think it is impossible for this belief to be falsified; and if it were, I would not say that my previous common sense beliefs had been false. This corresponds to what Pruss says at the end of his post, where he says that as long as things have forms, it is not really a skeptical scenario, even if in some sense he is in a computer simulation or whatever.

Why the insistence on form? This is related to what we called the Semi-Parmenidean Heresy. There we discussed Sean Carroll’s view, and noted that his position in essence is this: Metaphysically, the eliminativists are right. But it is useful to talk as though they are wrong, so we’re going to talk as though they are wrong, and even say they are wrong, by saying that common sense things are real.

This is ultimately incoherent: if the eliminativists are mistaken, they are mistaken in their metaphysics, since the position is just a certain metaphysical position.

It is not difficult to see the connection. According to a strict eliminativist, it would be literally true that we do not have hands, because there is no such thing as “we” or as “hands” in the first place. There are just fundamental particles. In other words, eliminativism would be even more of a skeptical scenario than the Matrix; the Matrix would not imply that your common sense beliefs are false, while eliminativism simply says that all of your beliefs are false, including your belief that you have beliefs.

And on the other hand, no scenario will be truly skeptical, even one like the Matrix, if it admits that our common sense beliefs are true. And as I said at the end of the post on Carroll’s view, this requires a metaphysics that allows those beliefs to be true, and this requires formal causes.

Alexander Pruss, however, seems to me to interpret this in a rather narrow way in his concluding remark:

If physical reality is the memory of a giant computer but it isn’t a mere simulation but is in fact informed by a multiplicity of substantial and accidental forms corresponding to people, trees, hands, hearts, etc., and these forms are real entities, then the scenario does not seem to me to be a sceptical scenario.

It is not clear what it means to be “real entities” rather than being unreal, given that you acknowledge them in the first place, and it isn’t clear to me what he means by a “mere simulation.” But this sounds a lot to me like, “If the world isn’t Aristotelian, understood in a very narrow way, then that would be a skeptical scenario.” This seems to me a kind of stubbornness much like that of James Larson. Disagreeing with you is not a war against being, and believing that your account of form and matter didn’t get every detail right is not saying that our common sense beliefs are not true.

As an illustration of the narrowness in question, consider Pruss’s position on artifacts:

Suppose I am a plumber, and I take a section of pipe, insert a blowgun dart, and blow.  I just shot a dart out of a blowgun.  When did the pipe turn into a blowgun, though?

Did it happen when I formed the intention to use the pipe as a blowgun?  No: I do not have the power to make new material objects come into existence just by thinking about it.

When I picked up the pipe?  There at least is contact.  But surely it’s not the right kind of contact.  It would be magic if I could make a new material object come into existence by just picking up a material object with a certain thought in mind.

When I inserted the dart?  Presumably, not any insertion will do, but one with a plan to blow.  For I could just be doing plumbing, using the outer diameter of the dart to measure the inner diameter of the pipe, and that shouldn’t turn the pipe into a blowgun.  Again, we have some magic here–thinking about the pipe in one way while inserting the dart creates a blowgun while thinking about it another way leaves it a boring pipe.  Moreover, putting the dart into the pipe seems to be an instance of loading a blowgun rather than making a blowgun.

The solution to all this is to deny that there are pipes and blowguns.  There is just matter (or fields) arranged pipewise and blowgunwise.  And for convenience we adopt ways of speaking that make it sound like such objects are among the furniture of the universe.

Pruss is not simply putting out a position for discussion; this is what he believes to be true, as is easily confirmed elsewhere on his blog. Note that he is falling into the Semi-Parmenidean heresy here, except that he is even going farther than Carroll, and suggesting that “there are no pipes and blowguns” is a true statement, which Carroll would rightly deny. In this way Pruss is almost a pure eliminativist about artifacts. (He does also speak elsewhere more in the manner of Sean Carroll about them.)

To the degree that he is eliminativist about artifacts, he contradicts common sense in the same kind of way that someone contradicts common sense who says, “You do not have hands.” He just contradicts it about different things. And why about these things, and not others? I suggest that it is because under the ordinary Aristotelian account, it is likely that a man or a horse has a substantial form, but unlikely that a pipe has one. And although a pipe would have various accidental forms, the idea of a unified form of “pipeness” seems pretty unlikely. If this is actually his reason or part of it, then he is identifying skepticism with disagreeing with his philosophical opinions, even though his own opinions actually contain the skepticism: namely, disagreement with common sense.

My own response to this question would be different: being is said in many ways, and consequently also form and unity. And I reject any disagreement with common sense: men and horses are real, but so also are pipes. If I am not mistaken, all of these will have being and form in the way that is appropriate to them.

Mind and Matter

In Book III of On the Soul, Aristotle argues that the intellect does not have a bodily organ:

Therefore, since everything is a possible object of thought, mind in order, as Anaxagoras says, to dominate, that is, to know, must be pure from all admixture; for the co-presence of what is alien to its nature is a hindrance and a block: it follows that it too, like the sensitive part, can have no nature of its own, other than that of having a certain capacity. Thus that in the soul which is called mind (by mind I mean that whereby the soul thinks and judges) is, before it thinks, not actually any real thing. For this reason it cannot reasonably be regarded as blended with the body: if so, it would acquire some quality, e.g. warmth or cold, or even have an organ like the sensitive faculty: as it is, it has none. It was a good idea to call the soul ‘the place of forms’, though (1) this description holds only of the intellective soul, and (2) even this is the forms only potentially, not actually.
Observation of the sense-organs and their employment reveals a distinction between the impassibility of the sensitive and that of the intellective faculty. After strong stimulation of a sense we are less able to exercise it than before, as e.g. in the case of a loud sound we cannot hear easily immediately after, or in the case of a bright colour or a powerful odour we cannot see or smell, but in the case of mind thought about an object that is highly intelligible renders it more and not less able afterwards to think objects that are less intelligible: the reason is that while the faculty of sensation is dependent upon the body, mind is separable from it.

There are two arguments here, one from the fact that the mind can understand at all, and the other from the effect of thinking about highly intelligible things.

St. Thomas explains the first argument:

The following argument may make this point clear. Anything that is in potency with respect to an object, and able to receive it into itself, is, as such, without that object; thus the pupil of the eye, being potential to colours and able to receive them, is itself colourless. But our intellect is so related to the objects it understands that it is in potency with respect to them, and capable of being affected by them (as sense is related to sensible objects). Therefore it must itself lack all those things which of its nature it understands. Since then it naturally understands all sensible and bodily things, it must be lacking in every bodily nature; just as the sense of sight, being able to know colour, lacks all colour. If sight itself had any particular colour, this colour would prevent it from seeing other colours, just as the tongue of a feverish man, being coated with a bitter moisture, cannot taste anything sweet. In the same way then, if the intellect were restricted to any particular nature, this connatural restriction would prevent it from knowing other natures. Hence he says: ‘What appeared inwardly would prevent and impede’ (its knowledge of) ‘what was without’; i.e. it would get in the way of the intellect, and veil it so to say, and prevent it from inspecting other things. He calls ‘the inwardly appearing’ whatever might be supposed to be intrinsic and co-natural to the intellect and which, so long as it ‘appeared’ therein would necessarily prevent the understanding of anything else; rather as we might say that the bitter moisture was an ‘inwardly appearing’ factor in a fevered tongue.

This is similar to St. Thomas’s suggestion elsewhere that matter and understanding are intrinsically opposed to one another. I cautioned the reader there about taking such an argument as definitive too quickly, and I would do the same here. Consider the argument about sensation: it is true enough that the pupil isn’t colored, and that perception of temperature is relative to the temperature of the organ of touch, or some aspects of it, which suggests that heat in the organ impedes the sensation of heat. On the other hand, the optic nerve and the visual cortex are arguably even more necessary to the sense of sight than the pupil, and they most certainly are not colorless. The facts about the pupil, the way touch functions, and so on, should certainly be taken into consideration, but they do not come close to establishing as a fact that the intellect does not have an organ.

Likewise, with the second argument, Aristotle is certainly pointing to a difference between the intellect and the senses, even if this argument might need qualification, since one does tire even of thinking. But saying that the intellect is not merely another sense is one thing, and saying that it does not have an organ at all is another.

We previously considered Sean Collins’s discussion of Aristotle and the history of science. Following on one of the passages quoted in the linked post, Collins continues:

I said above that Aristotle thinks somewhat Platonically “despite himself.” He himself is very remarkably aware that matter will make a difference in the account of things, even if the extent of the difference remains as yet unknown. And Aristotle makes, in this connection, a distinction which is well known to the scholastic tradition, but not equally well understood: that, namely, between the “logical” consideration of a question, and the “physical” consideration of it. Why make that distinction? Its basis lies in the discovery that matter is a genuine principle. For, on the one hand, the mind and its act are immaterial; but the things to be known in the physical world are material. It becomes necessary, therefore, for the mind to “go out of itself,” as it were, in the effort to know things. This is precisely what gives rise to what is called the “order of concretion.”

But how much “going out of itself” will be necessary, or precisely how that is to be done, is not something that can be known without experience — the experience, as it turns out, not merely of an individual but of an entire tradition of thought. Here I am speaking of history, and history has, indeed, everything to do with what I am talking about. Aristotle’s disciples are not always as perspicacious as their master was. Some of them suppose that they should follow the master blindly in the supposition that history has no significant bearing on the “disciplines.” That supposition amounts, at least implicitly, to a still deeper assumption: the assumption, namely, that the materiality of human nature, and of the cosmos, is not so significant as to warrant a suspicion that historical time is implicated in the material essence of things. Aristotle did not think of time as essentially historical in the sense I am speaking of here. The discovery that it was essentially historical was not yet attainable.

I would argue that Sean Collins should consider how similar considerations would apply to his remark that “the mind and its act are immaterial.” Perhaps we know in a general way that sensation is more immaterial than growth, but we do not think that sensation therefore does not involve an organ. How confident should one be that the mind does not use an organ based on such general considerations? Just as there is a difference between the “logical” consideration of time and motion and their “physical” consideration, so there might be a similar difference between two kinds of consideration of the mind.

Elsewhere, Collins criticizes a certain kind of criticism of science:

We do encounter the atomists, who argue to a certain complexity in material things. Most of our sophomore year’s natural science is taken up with them. But what do we do with them? The only atomists we read are the early ones, who are only just beginning to discover evidence for atoms. The evidence they possess for atoms is still weak enough so that we often think we can take refuge in general statements about the hypothetical nature of modern science. In other words, without much consideration, we are tempted to write modern science off, so that we can get back to this thing we call philosophy.

Some may find that description a little stark, but at any rate, right here at the start, I want to note parenthetically that such a dismissal would be far less likely if we did not often confuse experimental science with the most common philosophical account of contemporary science. That most common philosophical account is based largely on the very early and incomplete developments of science, along with an offshoot of Humean philosophy which came into vogue mainly through Ernst Mach. But if we look at contemporary science as it really is today, and take care to set aside accidental associations it has with various dubious philosophies, we find a completely wonderful and astonishing growth of understanding of the physical structure not only of material substances, but of the entire cosmos. And so while some of us discuss at the lunch table whether the hypothesis of atoms is viable, physicists and engineers around the world make nanotubes and other lovely little structures, even machines, out of actual atoms of various elements such as carbon.

And likewise during such discussions, neuroscientists discuss which parts of the brain are responsible for abstract thought.

When we discussed the mixing of wine and water, we noted how many difficulties could arise when you consider a process in detail, which you might not notice simply with a general consideration. The same thing will certainly happen in the consideration of how the mind works. For example, how am I choosing these words as I type? I do not have the time to consider a vast list of alternatives for each word, even though there would frequently be several possibilities, and sometimes I do think of more than one. Other times I go back and change a word or two, or more. But most of the words are coming to me as though by magic, without any conscious thought. Where is this coming from?

The selection of these words is almost certainly being done by a part of my brain. A sign of this is that those with transcortical motor aphasia have great difficulty selecting words, but do not have a problem with understanding.

This is only one small element of a vast interconnected process which is involved in understanding, thinking, and speaking. And precisely because there is a very complex process here which is not completely understood, the statement, “well, these elements are organic, but there is also some non-organic element involved,” cannot be proved to be false in a scientific manner, at least at this time. But it also cannot be proved to be true, and if it did turn out to be true, there would have to be concrete relationships between that element and all the other elements. What would be the contribution of the immaterial element? What would happen if it were lacking, or if that question does not make sense, because it cannot be lacking, why can it not be lacking?


The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to the normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for several reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not, has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
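The weighing described here can be put in a toy expected-utility form (my own sketch; all numbers are illustrative assumptions, not anything in the original argument). The point is only that the small benefit of holding a true belief can be swamped by a large cost attached to the belief itself:

```python
# Toy expected-utility sketch of the heaven/hell thought experiment.
# The decree: believers in libertarian free will go to hell, disbelievers
# go to heaven. Utilities below are stipulated for illustration only.

TRUTH_BONUS = 1              # small value of the belief happening to be true
HEAVEN, HELL = 1000, -1000   # outcomes fixed by the decree

def expected_value(believe_free_will, p_free_will):
    """Expected value of holding the belief, given probability
    p_free_will that libertarian free will actually exists."""
    value = 0.0
    for free_will_true, p in ((True, p_free_will), (False, 1 - p_free_will)):
        v = HELL if believe_free_will else HEAVEN  # decree dominates
        if believe_free_will == free_will_true:
            v += TRUTH_BONUS  # the belief also happens to be true
        value += p * v
    return value

# Disbelief wins at every probability, since the decree outweighs truth:
for p in (0.1, 0.5, 0.9):
    assert expected_value(False, p) > expected_value(True, p)
```

Nothing hangs on the particular numbers: so long as the cost attached to the belief exceeds the value of its truth, the comparison comes out the same way, which is just the point that consequences of being mistaken must enter the weighing.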

Alien Implant: Newcomb’s Smoking Lesion

In an alternate universe, on an alternate earth, all smokers, and only smokers, get brain cancer. Everyone enjoys smoking, but many resist the temptation to smoke, in order to avoid getting cancer. For a long time, however, there was no known cause of the link between smoking and cancer.

Twenty years ago, autopsies revealed tiny black boxes implanted in the brains of dead persons, connected to their brains by means of intricate wiring. The source and function of the boxes and of the wiring, however, remains unknown. There is a dial on the outside of the boxes, pointing to one of two positions.

Scientists now know that these black boxes are universal: every human being has one. And in those humans who smoke and get cancer, in every case, the dial turns out to be pointing to the first position. Likewise, in those humans who do not smoke or get cancer, in every case, the dial turns out to be pointing to the second position.

It turns out that when the dial points to the first position, the black box releases dangerous chemicals into the brain which cause brain cancer.

Scientists first formed the reasonable hypothesis that smoking causes the dial to be set to the first position. Ten years ago, however, this hypothesis was definitively disproved. It is now known with certainty that the box is present, and the dial pointing to its position, well before a person ever makes a decision about smoking. Attempts to read the state of the dial during a person’s lifetime, however, result most unfortunately in an explosion of the equipment involved, and the gruesome death of the person.

Some believe that the black box must be reading information from the brain, and predicting a person’s choice. “This is Newcomb’s Problem,” they say. These persons choose not to smoke, and they do not get cancer. Their dials turn out to be set to the second position.

Others believe that such a prediction ability is unlikely. The black box is writing information into the brain, they believe, and causing a person’s choice. “This is literally the Smoking Lesion,” they say. Accepting Andy Egan’s conclusion that one should smoke in such cases, these persons choose to smoke, and they die of cancer. Their dials turn out to be set to the first position.

Still others, more perceptive, note that the argument about prediction or causality is utterly irrelevant for all practical purposes. “The ritual of cognition is irrelevant,” they say. “What matters is winning.” Like the first group, these choose not to smoke, and they do not get cancer. Their dials, naturally, turn out to be set to the second position.

 

Chastek on Determinism

On a number of occasions, James Chastek has referred to the impossibility of a detailed prediction of the future as an argument for libertarian free will. This is a misunderstanding. It is impossible to predict the future in detail for the reasons given in the linked post, and this has nothing to do with libertarian free will or even any kind of free will at all.

The most recent discussions of this issue at Chastek’s blog are found here and here. The latter post:

Hypothesis: A Laplacian demon, i.e. a being who can correctly predict all future actions, contradicts our actual experience of following instructions with some failure rate.

Set up: You are in a room with two buttons, A and B. This is the same set-up as Soon’s free-will experiment, but the instructions are different.

Instructions: You are told that you will have to push a button every 30 seconds, and that you will have fifty trials. The clock will start when a sheet of paper comes out of a slit in the wall that says A or B. Your instructions are to push the opposite of whatever letter comes out.

The Apparatus: the first set of fifty trials is with a random letter generator. The second set of trials is with letters generated by a Laplacian demon who knows the wave function of the universe and so knows in advance what button will be pushed and so prints out the letter.

The Results: In the first set of trials, which we can confirm with actual experience, the success rate is close to 100%, but, the world being what it is, there is a 2% mistake rate in the responses. In the second set of trials the success rate is necessarily 0%. In the first set of trials, subjects report feelings of boredom, mild indifference, continual daydreaming, etc. The feelings expressed in the second trial might be any or all of the following: some say they suddenly developed a pathological desire to subvert the commands of the experiment, others express feelings of being alienated from their bodies, trying to press one button and having their hand fly in the other direction, others insist that they did follow instructions and consider you completely crazy for suggesting otherwise, even though you can point to video evidence of them failing to follow the rules of the experiment, etc.

The Third Trial: Run the trial a third time, this time giving the randomly generated letter to the subject and giving the Laplacian letter to the experimenter. Observe all the trials where the two generate the same letter, and iterate the experiment until one has fifty trials. Our actual experience tells us that the subject will have a 98% success rate, but our theoretical Laplacian demon tells us that the success rate should be necessarily 0%. Since asserting that the random-number generator and the demon will never have the same response would make the error-rate necessarily disappear and cannot explain our actual experience of failures, the theoretical postulation of a Laplacian demon contradicts our actual experience. Q.E.D.

The post is phrased as a proof that Laplacian demons cannot exist, but in fact Chastek intends it to establish the existence of libertarian free will, which is a quite separate thesis; no one would be surprised if Laplacian demons cannot exist in the real world, but many people would be surprised if people turn out to have libertarian free will.

I explain in the comments there the problem with this argument:

Here is what happens when you set up the experiment. You approach the Laplacian demon and ask him to write the letter that the person is going to choose for the second set of 50 trials.

The demon will respond, “That is impossible. I know the wave function of the universe, and I know that there is no possible set of As and Bs such that, if that is the set written, it will be the set chosen by the person. Of course, I know what will actually be written, and I know what the person will do. But I also know that those do not and cannot match.”

In other words, you are right that the experiment is impossible, but this is not reason to believe that Laplacian demons are impossible; it is a reason to believe that it is impossible for anything to write what the person is going to do.

E.g. if your argument works, it proves either that God does not exist, or that he does not know the future. Nor can one object that God’s knowledge is eternal rather than of the future, since it is enough if God can write down what is going to happen, as he is thought to have done e.g. in the text, “A virgin will conceive etc.”

If you answer, as you should, that God cannot write what the person will do, but he can know it, the same applies to the Laplacian demon.

As another reality check here, according to St. Thomas a dog is “determinate to one” such that in the same circumstances it will do the same thing. But we can easily train a dog in such a way that no one can possibly write down the levers it will choose, since it will be trained to choose the opposite ones.

And still another: a relatively simple robot, programmed in the same way. We don’t need a Laplacian demon, since we can predict ourselves in every circumstance what it will do. But we cannot write that down, since then we would predict the opposite of what we wrote. And it is absolutely irrelevant that the robot is an “instrument,” since the argument does not have any premise saying that human beings are not instruments.
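The robot case can be made concrete in a few lines of Python (an illustrative sketch of my own, not anything from the original discussion): the robot is perfectly deterministic, and we can predict its response to every input, yet no letter written in advance can match what it will then do.

```python
# A fully deterministic "robot" that always presses the opposite button.
def robot(written_letter: str) -> str:
    return "B" if written_letter == "A" else "A"

# We can predict the robot's behavior for every possible input...
assert robot("A") == "B" and robot("B") == "A"

# ...yet no letter we write down in advance can match what the robot
# will do: the written letter is an initial condition that determines
# the opposite outcome.
assert all(robot(w) != w for w in "AB")
```

The prediction fails not because the robot is indeterminate, but because the announcement itself is a cause of the opposite result.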

As for the third set, if I understood it correctly you are indeed cherry picking — you are simply selecting the trials where the human made a mistake, and saying, “why did he consistently make a mistake in these cases?” There is no reason; you simply selected those cases.

Chastek responds to this comment in a fairly detailed way. Rather than responding directly to the comment there, I ask him to comment on several scenarios. The first scenario:

If I drop a ball on a table, and I ask you to predict where it is going to first hit the table, and say, “Please predict where it is going to first hit the table, and let me know your prediction by covering the spot with your hand and keeping it there until the trial is over,” is it clear to you that:

a) it will be impossible for you to predict where it is going to first hit in this way, since if you cover a spot it cannot hit there

and

b) this has nothing whatsoever to do with determinism or indeterminism of anything.

The second scenario:

Let’s make up a deterministic universe. It has no human beings, no rocks, nothing but numbers. The wave function of the universe is this: f(x)=x+1, where x is the initial condition and x+1 is the second condition.

We are personally Laplacian demons compared to this universe. We know what the second condition will be for any original condition.

Now give us the option of setting the original condition, and say:

Predict the second condition, and set that as the initial condition. This should lead to a result like (1,1) or (2,2), which contradicts our experience that the result is always higher than the original condition. So the hypothesis that we know the output given the input must be false.

The answer: No. It is not false that we know the output given the input. We know that these do not and cannot match, not because of anything indeterminate, but because the universe is based on the completely deterministic rule that f(x)=x+1, not f(x)=x.

Is it clear:

a) why a Laplacian demon cannot set the original condition to the resulting condition
b) this has nothing to do with anything being indeterminate
c) there is no absurdity in a Laplacian demon for a universe like this
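The toy universe can be written out explicitly (a minimal sketch, assuming integer states): complete determinism and complete foreknowledge coexist with the impossibility of setting an initial condition equal to its own outcome.

```python
# The toy universe: its entire "wave function" is the successor rule.
def f(x: int) -> int:
    return x + 1

# As Laplacian demons for this universe, we know every outcome in advance.
assert f(7) == 8

# But no initial condition can equal its own outcome, precisely because
# the deterministic rule is f(x) = x + 1, not f(x) = x.
assert all(f(x) != x for x in range(1000))
```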

The reason why I presented these questions instead of responding directly to his comments is that his comments are confused, and an understanding of these situations would clear up that confusion. For unclear reasons, Chastek failed to respond to these questions. Nonetheless, I will respond to his detailed comments in the light of the above explanations. Chastek begins:

Here are my responses:

That is impossible… I know what will actually be written, and I know what the person will do. But I also know that those do not and cannot match

But “what will actually be written” is, together with a snapshot of the rest of the universe, an initial condition and “what the person will do” is an outcome. Saying these “can never match” means the demon is saying “the laws of nature do not suffice to go from this initial condition to one of its outcomes” which is to deny Laplacian demons altogether.

The demon is not saying that the laws of nature do not suffice to go from an initial condition to an outcome. It is saying that “what will actually be written” is part of the initial conditions, and that it is an initial condition that is a determining factor that prevents itself from matching the outcome. In the case of the dropping ball above, covering the spot with your hand is an initial condition, and it absolutely prevents the outcome being that the ball first hits there. In the case of f(x), x is an initial condition, and it prevents the outcome from being x, since it will always be x+1. In the same way, in Chastek’s experiment, what is written is an initial condition which prevents the outcome from being that thing which was written.

If you answer, as you should, that God cannot write what the person will do, but he can know it, the same applies to the Laplacian demon.

When God announces what will happen he can be speaking about what he intends to do, while a LD cannot. I’m also very impressed by John of St. Thomas’s arguments that the world is not only notionally present to God but even physically present within him, which makes for a dimension of his speaking of the future that could never be said of an LD. This is in keeping with the Biblical idea that God not only looks at the world but responds and interacts with it. The character of prophesy is also very different from the thought experiment we’re trying to do with an LD: LD’s are all about what we can predict in advance, but Biblical prophesies do not seem to be overly concerned with what can be predicted in advance, as should be shown from the long history of failed attempts to turn the NT into a predictive tool.

If God says, “the outcome will be A,” and then consistently causes the person to choose A even when the person has hostile intentions, this will be contrary to our experience in the same way that the Laplacian demon would violate our experience if it always got the outcome right. You can respond, “ok, but that’s fine, because we’re admitting that God is a cause, but the Laplacian demon is not supposed to be affecting the outcome.” The problem with the response is that God is supposed to be the cause all of the time, not merely some of the time; so why should he not also say what is going to happen, since he is causing it anyway?

I agree that prophecy in the real world never tells us much detail about the future in fact, and this is verified in all biblical prophecies and in all historical cases such as the statements about the future made by the Fatima visionaries. I also say that even in principle God could not consistently predict in advance a person’s actions, and show him those predictions, without violating his experience of choice, but I say that this is for the reasons given here.

But the point of my objection was not about how prophecy works in the real world. The point was that Catholic doctrine seems to imply that God could, if he wanted, announce what the daily weather is going to be for the next year. It would not bother me personally if this turns out to be completely impossible; but is Chastek prepared to say the same? The real issues with the Laplacian demon are the same: knowing exactly what is going to happen, and to what degree it can announce what it knows.

we can easily train a dog in such a way that no one can possibly write down the levers it will choose, since it will be trained to choose the opposite ones.

Such an animal would follow instructions with some errors, and so would be a fine test subject for my experiment. This is exactly what my subject does in trial #1. I say the same for your robot example.

(ADDED LATER) I’m thankful for this point and developed it for reasons given above on the thread.

This seems to indicate the source of the confusion, relative to my examples of covering the place where the ball hits, and the case of the function f(x) = x+1. There is no error rate in these situations: the ball never hits the spot you cover, and f(x) never equals x.

But this is really quite irrelevant. The reason the Laplacian demon says that the experiment is impossible has nothing to do with the error rate, but with the anti-correlation between what is written and the outcome. Consider: suppose in fact you never make a mistake. There is no error rate. Nonetheless, the demon still cannot say what you are going to do, because you always do the opposite of what it says. Likewise, even if the dog never fails to do what it was trained to do, it is impossible for the Laplacian demon to say what it is going to do, since it always does the opposite. The same is true for the robot. In other words, my examples show the reason why the experiment is impossible, without implying that a Laplacian demon is impossible.

We can easily reconstruct my examples to contain an error rate, and nonetheless prediction will be impossible for the same reasons, without implying that anything is indeterminate. For example:

Suppose that the world is such that every tenth time you try to cover a spot, your hand slips off and stops blocking it. I specify every tenth time to show that determinism has nothing to do with this: the setup is completely determinate. In this situation, you are able to indicate the spot where the ball will hit every tenth time, but no more often than that.

Likewise suppose we have f(x) = x+1, with one exception such that f(5) = 5. If we then ask the Laplacian demon (namely ourselves) to provide five x such that the output equals the input, we will not be able to do it in five cases, but we will be able to do it in one. Since this universe (the functional universe) is utterly deterministic, the fact that we cannot present five such cases does not indicate something indeterminate. It just indicates a determinate fact about how the function universe works.
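The modified function universe can likewise be sketched (again assuming integer states, as an illustration): the universe is utterly deterministic, yet it contains exactly one fixed point, so the demon can satisfy the request once but not five times.

```python
def f(x: int) -> int:
    # The successor rule with a single deterministic exception at x = 5.
    return x if x == 5 else x + 1

# Asked for five inputs whose output equals the input, we can supply
# exactly one. The shortfall reflects how this deterministic universe
# works, not any indeterminism in it.
fixed_points = [x for x in range(1000) if f(x) == x]
assert fixed_points == [5]
```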

As for the third set, if I understood it correctly you are indeed cherry picking — you are simply selecting the trials where the human made a mistake,

LD’s can’t be mistaken. If they foresee outcome O from initial conditions C, then no mistake can make O fail to come about. But this isn’t my main point, which is simply to repeat what I said to David: cherry picking requires disregarding evidence that goes against your conclusion, but the times when the random number generator and the LD disagree provide no evidence whether LD’s are consistent with our experience of following instructions with some errors.

I said “if I understood it correctly” because the situation was not clearly laid out. I understood the setup to be this: the Laplacian demon writes out fifty letters, A or B, being the letters it sees that I am going to write. It does not show me this series of letters. Instead, a random process outputs a series of letters, A or B, and each time I try to select the opposite letter.

Given this setup, what the Laplacian demon writes always matches what I select. And most of the time, both are the opposite of what was output by the random process. But occasionally I make a mistake, that is, I fail to select the opposite letter, and choose the same letter that the random process chose. In these cases, since the Laplacian demon still knew what was going to happen, the demon’s letter also matches the random process letter, and my letter.

Now, Chastek says, consider only the cases where the demon’s letter is the same as the random process letter. It will turn out that over those cases, I have a 100% failure rate: that is, in every such case I selected the same letter as the random process. According to him, we should consider this surprising, since we would not normally have a 100% failure rate. This is not cherry picking, he says, because “the times when the random number generator and the LD disagree provide no evidence whether LD’s are consistent with our experience of following instructions with some errors.”

The problem with this should be obvious. Let us consider demon #2: he looks at what the person writes, and then writes down the same thing. Is this demon possible? There will be some cases where demon #2 writes down the opposite of what the random process output: those will be the cases where the person did not make a mistake. But there will be other cases where the person makes a mistake. In those cases, what the person writes, and what demon #2 writes, will match the output of the random process. Consider only those cases. The person has a 100% failure rate in those cases. The cases where the random process and demon #2 disagree provide no evidence whether demon #2 is consistent with our experience, so this is not cherry picking. Now it is contrary to our experience to have a 100% failure rate. So demon #2 is impossible.

This result is of course absurd – demon #2 is obviously entirely possible, since otherwise making copies of things would be impossible. This is sufficient to establish that Chastek’s response is mistaken. He is indeed cherry picking: he simply selected the cases where the human made a mistake, and noted that there was a 100% failure rate in those cases.
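The selection effect can be simulated directly (a sketch of my own, using the 2% mistake rate from Chastek’s setup; the variable names are illustrative): demon #2 merely copies the human’s choice, and conditioning on the trials where its letter matches the random letter selects exactly the human’s mistakes.

```python
import random

random.seed(0)
ERROR_RATE = 0.02   # the 2% mistake rate from the thought experiment
TRIALS = 100_000

overall_failures = 0
selected_failures = 0
selected_total = 0

for _ in range(TRIALS):
    rand_letter = random.choice("AB")
    # The human tries to choose the opposite letter, slipping 2% of the time.
    if random.random() < ERROR_RATE:
        human = rand_letter                       # mistake: same letter
    else:
        human = "B" if rand_letter == "A" else "A"
    demon2 = human                                # demon #2 simply copies
    overall_failures += (human == rand_letter)
    if demon2 == rand_letter:                     # condition on agreement
        selected_total += 1
        selected_failures += (human == rand_letter)

print(overall_failures / TRIALS)                  # close to 0.02
print(selected_failures / selected_total)         # exactly 1.0
```

The conditioned “100% failure rate” holds by construction, since demon #2’s letter equals the random letter exactly when the human erred; it tells us nothing about whether such a demon is possible.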

In other words, we do not need a formal answer to Chastek’s objection to see that there is something very wrong with it; but the formal answer is that the cases where the demon disagrees with the random process do indeed provide some evidence. The question is whether the existence of the demon is consistent with “our experience of following instructions with some errors.” But we cannot have this experience without sometimes following the instructions correctly; being right is part of this experience, just like being wrong. And the cases where the demon disagrees with the random process are cases where we follow the instructions correctly, and such cases provide evidence that the demon is possible.

Chastek provides an additional comment about the case of the dog:

Just a note, one point I am thankful to EU for is the idea that a trained dog might be a good test subject too. If this is right, then the recursive loop might not be from intelligence as such but the intrinsic indeterminism of nature, which we find in one way through (what Aristotle called) matter being present in the initial conditions and the working of the laws and in another through intelligence. But space is opened for one with the allowing of the other, since on either account nature has to allow for teleology.

I was pointing to St. Thomas in my response with the hope that St. Thomas’s position would at least be seen as reasonable; and there is no question that St. Thomas believes that there is no indeterminism whatsoever in the behavior of a dog. If a dog is in the same situation, he believes, it will do exactly the same thing. In any case, Chastek does not address this, so I will not try at this time to establish the fact of St. Thomas’s position.

The main point is that, as we have already shown, the reason it is impossible to predict what the dog will do has nothing to do with indeterminism, since such prediction is impossible even if the dog is infallible, and remains impossible even if the dog has a deterministic error rate.

The comment, “But space is opened for one with the allowing of the other, since on either account nature has to allow for teleology,” may indicate why Chastek is so insistent in his error: in his opinion, if nature is deterministic, teleology is impossible. This is a mistake much like Robin Hanson’s mistake explained in the previous post. But again I will leave this for later consideration.

I will address one last comment:

I agree the physical determinist’s equation can’t be satisfied for all values, and that what makes it possible is the presence of a sort of recursion. But in the context of the experiment this means that the letter on a sheet of paper together with a snapshot of the rest of the universe can never be an initial condition, but I see no reason why this would be the case. Even if I granted their claim that there was some recursive contradiction, it does not arise merely because the letter is given in advance, since the LD could print out the letter in advance just fine if the initial conditions were, say, a test particle flying through empty space toward button A with enough force to push it.

It is true that the contradiction does not arise just because the Laplacian demon writes down the letter. There is no contradiction even in the human case, if the demon does not show it to the human. Nor does anything contrary to our experience happen in such a case. The case which is contrary to our experience is when the demon shows the letter to the person; and this is indeed impossible on account of a recursive contradiction, not because the demon is impossible.

Consider the case of the test particle flying towards button A: it is not a problem for the demon to write down the outcome precisely because what is written has no particular influence, in this case, on the outcome.

But if “writing the letter” means covering the button, as in our example of covering the spot where the ball will hit, then the demon will not be able to write the outcome in advance. And obviously this will not mean there is any indeterminism.

The contradiction comes about because covering the button prevents the button from being pushed. And the contradiction comes about in the human case in exactly the same way: writing a letter causes, via the human’s intention to follow the instructions, the opposite outcome. Again indeterminism has nothing to do with this: the same thing will happen if the human is infallible, or if the human has an error rate which has deterministic causes.

“This means that the letter on a sheet of paper together with a snapshot of the rest of the universe can never be an initial condition.” No, it means that in some of the cases, namely those where the human will be successful in following instructions, the letter with the rest of the universe cannot be an initial condition where the outcome is the same as what is written. While there should be no need to repeat the reasons for this at this point, the reason is that “what is written” is a cause of the opposite outcome, and whether that causality is deterministic or indeterministic has nothing to do with the impossibility. The letter can indeed be an initial condition: but it is an initial condition where the outcome is the opposite of the letter, and the demon knows all this.

Age of Em

This is Robin Hanson’s first book. Hanson gradually introduces his topic:

You, dear reader, are special. Most humans were born before 1700. And of those born after, you are probably richer and better educated than most. Thus you and most everyone you know are special, elite members of the industrial era.

Like most of your kind, you probably feel superior to your ancestors. Oh, you don’t blame them for learning what they were taught. But you’d shudder to hear of many of your distant farmer ancestors’ habits and attitudes on sanitation, sex, marriage, gender, religion, slavery, war, bosses, inequality, nature, conformity, and family obligations. And you’d also shudder to hear of many habits and attitudes of your even more ancient forager ancestors. Yes, you admit that lacking your wealth your ancestors couldn’t copy some of your habits. Even so, you tend to think that humanity has learned that your ways are better. That is, you believe in social and moral progress.

The problem is, the future will probably hold new kinds of people. Your descendants’ habits and attitudes are likely to differ from yours by as much as yours differ from your ancestors. If you understood just how different your ancestors were, you’d realize that you should expect your descendants to seem quite strange. Historical fiction misleads you, showing your ancestors as more modern than they were. Science fiction similarly misleads you about your descendants.

As an example of the kind of past difference that Robin is discussing, even in the fairly recent past, consider this account by William Ewald of a trial from the sixteenth century:

In 1522 some rats were placed on trial before the ecclesiastical court in Autun. They were charged with a felony: specifically, the crime of having eaten and wantonly destroyed some barley crops in the jurisdiction. A formal complaint against “some rats of the diocese” was presented to the bishop’s vicar, who thereupon cited the culprits to appear on a day certain, and who appointed a local jurist, Barthelemy Chassenée (whose name is sometimes spelled Chassanée, or Chasseneux, or Chasseneuz), to defend them. Chassenée, then forty-two, was known for his learning, but not yet famous; the trial of the rats of Autun was to establish his reputation, and launch a distinguished career in the law.

When his clients failed to appear in court, Chassenée resorted to procedural arguments. His first tactic was to invoke the notion of fair process, and specifically to challenge the original writ for having failed to give the rats due notice. The defendants, he pointed out, were dispersed over a large tract of countryside, and lived in many villages; a single summons was inadequate to notify them all. Moreover, the summons was addressed only to some of the rats of the diocese; but technically it should have been addressed to them all.

Chassenée was successful in his argument, and the court ordered a second summons to be read from the pulpit of every local parish church; this second summons now correctly addressed all the local rats, without exception.

But on the appointed day the rats again failed to appear. Chassenée now made a second argument. His clients, he reminded the court, were widely dispersed; they needed to make preparations for a great migration, and those preparations would take time. The court once again conceded the reasonableness of the argument, and granted a further delay in the proceedings. When the rats a third time failed to appear, Chassenée was ready with a third argument. The first two arguments had relied on the idea of procedural fairness; the third treated the rats as a class of persons who were entitled to equal treatment under the law. He addressed the court at length, and successfully demonstrated that, if a person is cited to appear at a place to which he cannot come in safety, he may lawfully refuse to obey the writ. And a journey to court would entail serious perils for his clients. They were notoriously unpopular in the region; and furthermore they were rightly afraid of their natural enemies, the cats. Moreover (he pointed out to the court) the cats could hardly be regarded as neutral in this dispute; for they belonged to the plaintiffs. He accordingly demanded that the plaintiffs be enjoined by the court, under the threat of severe penalties, to restrain their cats, and prevent them from frightening his clients. The court again found this argument compelling; but now the plaintiffs seem to have come to the end of their patience. They demurred to the motion; the court, unable to settle on the correct period within which the rats must appear, adjourned on the question sine die, and judgment for the rats was granted by default.

Most of us would assume at once that this is all nothing but an elaborate joke; but Ewald strongly argues that it was all quite serious. This would actually be worthy of its own post, but I will leave it aside for now. In any case it illustrates the existence of extremely different attitudes even a few centuries ago.

In any event, Robin continues:

New habits and attitudes result less than you think from moral progress, and more from people adapting to new situations. So many of your descendants’ strange habits and attitudes are likely to violate your concepts of moral progress; what they do may often seem wrong. Also, you likely won’t be able to easily categorize many future ways as either good or evil; they will instead just seem weird. After all, your world hardly fits the morality tales your distant ancestors told; to them you’d just seem weird. Complex realities frustrate simple summaries, and don’t fit simple morality tales.

Many people of a more conservative temperament, such as myself, might wish to swap out “moral progress” here with “moral regress,” but the point stands in any case. This is related to our discussions of the effects of technology and truth on culture, and of the idea of irreversible changes.

Robin finally gets to the point of his book:

This book presents a concrete and plausible yet troubling view of a future full of strange behaviors and attitudes. You may have seen concrete troubling future scenarios before in science fiction. But few of those scenarios are in fact plausible; their details usually make little sense to those with expert understanding. They were designed for entertainment, not realism.

Perhaps you were told that fictional scenarios are the best we can do. If so, I aim to show that you were told wrong. My method is simple. I will start with a particular very disruptive technology often foreseen in futurism and science fiction: brain emulations, in which brains are recorded, copied, and used to make artificial “robot” minds. I will then use standard theories from many physical, human, and social sciences to describe in detail what a world with that future technology would look like.

I may be wrong about some consequences of brain emulations, and I may misapply some science. Even so, the view I offer will still show just how troublingly strange the future can be.

I greatly enjoyed Robin’s book, but unfortunately I have to admit that, in general, relatively few people will. It is easy enough to see the reason for this from Robin’s introduction. Who would expect to be interested? Possibly those who enjoy the “futurism and science fiction” concerning brain emulations; but if Robin does what he set out to do, those persons will find themselves strangely uninterested. As he says, science fiction is “designed for entertainment, not realism,” while he is attempting to answer the question, “What would this actually be like?” This intention is very remote from the intention of the science fiction, and consequently it will likely appeal to different people.

Whether or not Robin gets the answer to this question right, he definitely succeeds in making his approach and appeal differ from those of science fiction.

One might illustrate this with almost any random passage from the book. Here are portions of his discussion of the climate of em cities:

As we will discuss in Chapter 18, Cities section, em cities are likely to be big, dense, highly cost-effective concentrations of computer and communication hardware. How might such cities interact with their surroundings?

Today, computer and communication hardware is known for being especially temperamental about its environment. Rooms and buildings designed to house such hardware tend to be climate-controlled to ensure stable and low values of temperature, humidity, vibration, dust, and electromagnetic field intensity. Such equipment housing protects it especially well from fire, flood, and security breaches.

The simple assumption is that, compared with our cities today, em cities will also be more climate-controlled to ensure stable and low values of temperature, humidity, vibrations, dust, and electromagnetic signals. These controls may in fact become city level utilities. Large sections of cities, and perhaps entire cities, may be covered, perhaps even domed, to control humidity, dust, and vibration, with city utilities working to absorb remaining pollutants. Emissions within cities may also be strictly controlled.

However, an em city may contain temperatures, pressures, vibrations, and chemical concentrations that are toxic to ordinary humans. If so, ordinary humans are excluded from most places in em cities for safety reasons. In addition, we will see in Chapter 18, Transport section, that many em city transport facilities are unlikely to be well matched to the needs of ordinary humans.

Cities today are the roughest known kind of terrain, in the sense that cities slow down the wind the most compared with other terrain types. Cities also tend to be hotter than neighboring areas. For example, Las Vegas is 7° Fahrenheit hotter in the summer than are surrounding areas. This hotter city effect makes ozone pollution worse and this effect is stronger for bigger cities, in the summer, at night, with fewer clouds, and with slower wind (Arnfield 2003).

This is a mild reason to expect em cities to be hotter than other areas, especially at night and in the summer. However, as em cities are packed full of computing hardware, we shall now see that em cities will actually be much hotter.

While the book considers a wide variety of topics, e.g. the social relationships among ems, which look quite different from the above passage, the general mode of treatment is the same. As Robin put it, he uses “standard theories” to describe the em world, much as he employs standard theories about cities, about temperature and climate, and about computing hardware in the above passage.

One might object that basically Robin is positing a particular technological change (brain emulations), but then assuming that everything else is the same, and working from there. And there is some validity to this objection. But in the end there is actually no better way to try to predict the future; despite David Hume’s opinion, generally the best way to estimate the future is to say, “Things will be pretty much the same.”

At the end of the book, Robin describes various criticisms. First are those who simply said they weren’t interested: “If we include those who declined to read my draft, the most common complaint is probably ‘who cares?'” And indeed, that is what I would expect, since as Robin remarked himself, people are interested in an entertaining account of the future, not an attempt at a detailed description of what is likely.

Others, he says, “doubt that one can ever estimate the social consequences of technologies decades in advance.” This is basically the objection I mentioned above.

He lists one objection that I am partly in agreement with:

Many doubt that brain emulations will be our next huge technology change, and aren’t interested in analyses of the consequences of any big change except the one they personally consider most likely or interesting. Many of these people expect traditional artificial intelligence, that is, hand-coded software, to achieve broad human level abilities before brain emulations appear. I think that past rates of progress in coding smart software suggest that at previous rates it will take two to four centuries to achieve broad human level abilities via this route. These critics often point to exciting recent developments, such as advances in “deep learning,” that they think make prior trends irrelevant.

I don’t think Robin is necessarily mistaken in his expectations about “traditional artificial intelligence,” although he may be, and I am not by default uninterested in things that I do not consider the most likely. But I do think that traditional artificial intelligence is more likely than his scenario of brain emulations; more on this below.

There are two other likely objections that Robin does not include in this list, although he does touch on them elsewhere. First, people are likely to say that the creation of ems would be immoral, even if it is possible, and similarly that the kinds of habits and lives that he describes would themselves be immoral. On the one hand, this should not be a criticism at all, since Robin can respond that he is simply describing what he thinks is likely, not saying whether it should happen; on the other hand, it is in fact obvious that Robin has little disapproval, if any, of his scenario. The book in fact ends by calling attention to this objection:

The analysis in this book suggests that lives in the next great era may be as different from our lives as our lives are from farmers’ lives, or farmers’ lives are from foragers’ lives. Many readers of this book, living industrial era lives and sharing industrial era values, may be disturbed to see a forecast of em era descendants with choices and life styles that appear to reject many of the values that they hold dear. Such readers may be tempted to fight to prevent the em future, perhaps preferring a continuation of the industrial era. Such readers may be correct that rejecting the em future holds them true to their core values.

But I advise such readers to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. This book has been designed in part to assist you in such a soul-searching examination. If after reading this book, you still feel compelled to disown your em descendants, I cannot say you are wrong. My job, first and foremost, has been to help you see your descendants clearly, warts and all.

Our own discussions of the flexibility of human morality are relevant. The creatures Robin is describing are in many ways quite different from humans, and it is in fact very appropriate for their morality to differ from human morality.

A second likely objection is that Robin’s ems are simply impossible, on account of the nature of the human mind. I think that this objection is mistaken, but I will leave the details of this explanation for another time. Robin appears to agree with Sean Carroll about the nature of the mind, as can be seen for example in this post. Robin is mistaken about this, for the reasons suggested in my discussion of Carroll’s position. Part of the problem is that Robin does not seem to understand the alternative. Here is a passage from the linked post on Overcoming Bias:

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

“I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.”

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

There is a false dichotomy here, and it is the same one that C.S. Lewis falls into when he says, “Either we can know nothing or thought has reasons only, and no causes.” And in general it is like the error of the pre-Socratics, that if a thing has some principles which seem sufficient, it can have no other principles, failing to see that there are several kinds of cause, and each can be complete in its own way. And perhaps I am getting ahead of myself here, since I said this discussion would be for later, but the objection that Robin’s scenario is impossible is mistaken in exactly the same way, and for the same reason: people believe that if a “materialistic” explanation could be given of human behavior in the way that Robin describes, then people do not truly reason, make choices, and so on. But this is simply to adopt the other side of the false dichotomy, much like C.S. Lewis rejects the possibility of causes for our beliefs.

One final point. I mentioned above that I see Robin’s scenario as less plausible than traditional artificial intelligence. I agree with Tyler Cowen in this post. This present post is already long enough, so again I will leave a detailed explanation for another time, but I will remark that Robin and I have a bet on the question.