Start at the Beginning

This post will have two kinds of readers:

1) The few who have read the posts on this blog from the beginning, in chronological order, and who are now reading this one simply because it is the only one they have not read yet.

2) The vast majority who did not do the above.

For the first category, I don’t have any particular suggestion at the moment. Well done. That is the right way of reading this blog.

For the second category, you would do much better to stop right here in the middle of this post (without even finishing it), go back to the beginning, and read every post in chronological order.

….

So you are now in the first category? No? Since obviously you did not take my advice, let me explain both why you should, and why you will not.

It is possible to come to understand something through arguments, even if mere symbol manipulation is an even more common result. And since conclusions follow from premises, you can only do this by thinking about the premises first, and the conclusions second. Since my own interest is in understanding things, I intentionally organize the blog in this way. Of course, since the concrete historical process of an individual coming to understand some particular thing is messier and more complicated than a single argument or even than multiple arguments, the order isn’t an exact representation of my own history or someone else’s potential history. But it is certainly closer to that than any other order of reading would be.

You will object that you do not have the time to read 300 blog posts. Fine. But then why do you have time to read this one? Even if you are definitely committed to reading a small number of posts, you would do better to read a small number from the beginning. If you are committed to reading not more than one post a week, you would do better to read the 300 posts over the next six years, rather than reading the posts that are current.

You might think of other similar objections, but they will all fail in similar ways. If you are actually interested in understanding something from your reading, chronological order is the right order.

Of course, other blog authors might well argue in similar ways, but the number of people who actually do this, on any blog, is tiny. Instead, people read a few recent posts, and perhaps a few others if there is a chain of links that leads them there. But they do not, in the vast majority of cases, read from the beginning, whether all of it or only a part.

So let me explain why you will not take this advice, despite the fact that it is irrefutably correct. In The Elephant in the Brain, Robin Hanson and Kevin Simler remark in a chapter on conversation:

This view of talking—as a way of showing off one’s “backpack”—explains the puzzles we encountered earlier, the ones that the reciprocal-exchange theory had trouble with. For example, it explains why we see people jockeying to speak rather than sitting back and “selfishly” listening—because the spoils of conversation don’t lie primarily in the information being exchanged, but rather in the subtextual value of finding good allies and advertising oneself as an ally. And in order to get credit in this game, you have to speak up; you have to show off your “tools.”

But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information.

Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say.

Hanson and Simler are trying to explain various characteristics of conversation, such as the fact that people are typically more interested in speaking than in listening, as well as the requirement that conversational participants “stick to the topic.”

Later, they associate this with people’s interest in news:

Why have humans long been so obsessed with news? When asked to justify our strong interest, we often point to the virtues of staying apprised of the important issues of the day. During a 1945 newspaper strike in New York, for example, when the sociologist Bernard Berelson asked his fellow citizens, “Is it very important that people read the newspaper?” almost everyone answered with a “strong ‘yes,’ ” and most people cited the “ ‘serious’ world of public affairs.”

Now, it did make some sense for our ancestors to track news as a way to get practical information, such as we do today for movies, stocks, and the weather. After all, they couldn’t just go easily search for such things on Google like we can. But notice that our access to Google hasn’t made much of a dent in our hunger for news; if anything we read more news now that we have social media feeds, even though we can find a practical use for only a tiny fraction of the news we consume.

There are other clues that we aren’t mainly using the news to be good citizens (despite our high-minded rhetoric). For example, voters tend to show little interest in the kinds of information most useful for voting, including details about specific policies, the arguments for and against them, and the positions each politician has taken on each policy. Instead, voters seem to treat elections more like horse races, rooting for or against different candidates rather than spending much effort to figure out who should win. (See Chapter 16 for a more detailed discussion on politics.)

These patterns in behavior may be puzzling when we think of news as a source of useful information. But they make sense if we treat news as a larger “conversation” that extends our small-scale conversation habits. Just as one must talk on the current topic in face-to-face conversation, our larger news conversation also maintains a few “hot” topics—a focus so strong and so narrow that policy wonks say that there’s little point in releasing policy reports on topics not in the news in the last two weeks. (This is the criterion of relevance we saw earlier.)

The argument here suggests that blog readers will tend to prefer reading current posts to old ones because doing so keeps them more “relevant,” and that such relevance is necessary in order to impress other conversational participants. This, I suggest, is why you will not take my advice, despite its rightness. If you think this is an insulting explanation, just bear in mind that blog authors are even more insulted by Hanson’s and Simler’s explanations, since the reader at least is listening.

Hard Problem of Consciousness

We have touched on this in various places, and in particular in this discussion of zombies, but we are now in a position to give a more precise answer.

Bill Vallicella has a discussion of Thomas Nagel on this issue:

Nagel replies in the pages of NYRB (8 June 2017; HT: Dave Lull) to one Roy Black, a professor of bioengineering:

The mind-body problem that exercises both Daniel Dennett and me is a problem about what experience is, not how it is caused. The difficulty is that conscious experience has an essentially subjective character—what it is like for its subject, from the inside—that purely physical processes do not share. Physical concepts describe the world as it is in itself, and not for any conscious subject. That includes dark energy, the strong force, and the development of an organism from the egg, to cite Black’s examples. But if subjective experience is not an illusion, the real world includes more than can be described in this way.

I agree with Black that “we need to determine what ‘thing,’ what activity of neurons beyond activating other neurons, was amplified to the point that consciousness arose.” But I believe this will require that we attribute to neurons, and perhaps to still more basic physical things and processes, some properties that in the right combination are capable of constituting subjects of experience like ourselves, to whom sunsets and chocolate and violins look and taste and sound as they do. These, if they are ever discovered, will not be physical properties, because physical properties, however sophisticated and complex, characterize only the order of the world extended in space and time, not how things appear from any particular point of view.

The problem might be condensed into an aporetic triad:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely physical processes do not share.

3) The only acceptable explanation of conscious experience is in terms of physical properties alone.

Take a little time to savor this problem. Note first that the three propositions are collectively inconsistent: they cannot all be true.  Any two limbs entail the negation of the remaining one. Note second that each limb exerts a strong pull on our acceptance.  But we cannot accept them all because they are logically incompatible.

Which proposition should we reject? Dennett, I take it, would reject (1). But that’s a lunatic solution as Professor Black seems to appreciate, though he puts the point more politely. When I call Dennett a sophist, as I have on several occasions, I am not abusing him; I am underscoring what is obvious, namely, that the smell of cooked onions, for example, is a genuine datum of experience, and that such phenomenological data trump scientistic theories.

Sophistry aside, we either reject (2) or we reject (3).  Nagel and I accept (1) and (2) and reject (3). Black, and others of the scientistic stripe, accept (1) and (3) and reject (2).
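To make the logical shape explicit before constructing the parallel, here is a minimal propositional sketch in my own notation, not Vallicella’s, reading limb (3) in a simplified way as the claim that whatever is real is purely physical in character. On that reading, any two of the propositions entail the negation of the third:

```latex
% A rough rendering of the aporetic triad; p_3 is a simplified reading of limb (3).
\begin{align*}
p_1 &:\ \text{conscious experience is real (not an illusion)} \\
p_2 &:\ \text{conscious experience is not purely physical in character} \\
p_3 &:\ \text{whatever is real is purely physical in character}
\end{align*}
\[
p_1 \wedge p_2 \vdash \neg p_3, \qquad
p_1 \wedge p_3 \vdash \neg p_2, \qquad
p_2 \wedge p_3 \vdash \neg p_1.
\]
```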

In order to see the answer to this, we can construct a Parmenidean parallel to Vallicella’s aporetic triad:

1) Distinction is not an illusion.

2) Being has an essentially objective character of actually being that distinction does not share (considering that distinction consists in the fact of not being something.)

3) The only acceptable explanation of distinction is in terms of being alone (since there is nothing but being to explain things with.)

Parmenides rejects (1) here. What approach would Vallicella take? If he wishes to take an analogous approach, he should accept (1) and (2), and deny (3). And this would be a pretty commonsense approach, and perhaps the one that most people implicitly adopt if they ever think about the problem.

At the same time, it is easy to see that (3) is just about as obviously true as (1); and it is for this reason that Parmenides sees rejecting (1) and accepting (2) and (3) as reasonable.

The correct answer, of course, is that the three are not inconsistent despite appearances. In fact, we have effectively answered this in recent posts. Distinction is not an illusion, but a way that we understand things, as such. And being a way of understanding, it is not (as such) a way of being mistaken, and thus it is not an illusion, and thus the first point is correct. Again, being a way of understanding, it is not a way of being as such, and thus the second point is correct. And yet distinction can be explained by being, since there is something (namely relationship) which explains why it is reasonable to think in terms of distinctions.

Vallicella’s triad mentions “purely physical processes” and “physical properties,” but the idea of “physical” here is a distraction, and is not really relevant to the problem. Consider the following from another post by Vallicella:

If I understand Galen Strawson’s view, it is the first.  Conscious experience is fully real but wholly material in nature despite the fact that on current physics we cannot account for its reality: we cannot understand how it is possible for qualia and thoughts to be wholly material.   Here is a characteristic passage from Strawson:

Serious materialists have to be outright realists about the experiential. So they are obliged to hold that experiential phenomena just are physical phenomena, although current physics cannot account for them.  As an acting materialist, I accept this, and assume that experiential phenomena are “based in” or “realized in” the brain (to stick to the human case).  But this assumption does not solve any problems for materialists.  Instead it obliges them to admit ignorance of the nature of the physical, to admit that they don’t have a fully adequate idea of what the physical is, and hence of what the brain is.  (“The Experiential and the Non-Experiential” in Warner and Szubka, p. 77)

Strawson and I agree on two important points.  One is that what he calls experiential phenomena are as real as anything and cannot be eliminated or reduced to anything non-experiential. Dennett denied! The other is that there is no accounting for experiential items in terms of current physics.

I disagree on whether his mysterian solution is a genuine solution to the problem. What he is saying is that, given the obvious reality of conscious states, and given the truth of naturalism, experiential phenomena must be material in nature, and that this is so whether or not we are able to understand how it could be so.  At present we cannot understand how it could be so. It is at present a mystery. But the mystery will dissipate when we have a better understanding of matter.

This strikes me as bluster.

An experiential item such as a twinge of pain or a rush of elation is essentially subjective; it is something whose appearing just is its reality.  For qualia, esse = percipi.  If I am told that someday items like this will be exhaustively understood from a third-person point of view as objects of physics, I have no idea what this means.  The notion strikes me as absurd.  We are being told in effect that what is essentially subjective will one day be exhaustively understood as both essentially subjective and wholly objective.  And that makes no sense. If you tell me that understanding in physics need not be objectifying understanding, I don’t know what that means either.

Here Vallicella uses the word “material,” which is presumably equivalent to “physical” in the above discussion. But it is easy to see here that being material is not the problem: being objective is the problem. Material things are objective, and Vallicella sees an irreducible opposition between being objective and being subjective. In a similar way, we can reformulate Vallicella’s original triad so that it does not refer to being physical:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely objective processes do not share.

3) The only acceptable explanation of conscious experience is in terms of objective properties alone.

It is easy to see that this formulation is the real source of the problem. And while Vallicella would probably deny (3) even in this formulation, it is easy to see why people would want to accept (3). “Real things are objective,” they will say. If you want to explain anything, you should explain it using real things, and therefore objective things.

The parallel with the Parmenidean problem is evident. We would want to explain distinction in terms of being, since there isn’t anything else, and yet this seems impossible, so one (e.g. Parmenides) is tempted to deny the existence of distinction. In the same way, we would want to explain subjective experience in terms of objective facts, since there isn’t anything else, and yet this seems impossible, so one (e.g. Dennett) is tempted to deny the existence of subjective experience.

Just as the problem is parallel, the correct solution will be almost entirely parallel to the solution to the problem of Parmenides.

1) Conscious experience is not an illusion. It is a way of perceiving the world, not a way of not perceiving the world, and definitely not a way of not perceiving at all.

2) Consciousness is subjective, that is, it is a way that an individual perceives the world, not a way that things are as such, and thus not an “objective fact” in the sense that “the way things are” is objective.

3) The “way things are”, namely the objective facts, is sufficient to explain why individuals perceive the world. Consider again this post, responding to a post by Robin Hanson. We could reformulate his criticism to express instead Parmenides’s criticism of common sense (substituting talk of distinction for his talk of feeling, and making similar changes throughout):

People often state things like this:

I am sure that there is not just being, because I’m aware that some things are not other things. I know that being just isn’t non-being. So even though there is being, there must be something more than that to reality. So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care about distinctions, not just being; we want to know what out there is distinct from which other things.

But consider a key question: Does this other distinction stuff interact with the parts of our world that actually exist strongly and reliably enough to usually be the actual cause of humans making statements of distinction like this?

If yes, this is a remarkably strong interaction, making it quite surprising that philosophers, possibly excepting Duns Scotus, have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite understandable with existing philosophy. Any interaction not so understandable would have to be vastly more difficult to understand than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will understand such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of distinction, then we have a remarkable coincidence to explain. Somehow this extra distinction stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that distinction stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if distinction stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that distinction stuff actually exists? Such a coincidence seems too remarkable to be believed.

“Distinction stuff”, of course, does not exist, and neither does “feeling stuff.” But some things are distinct from others. Saying this is a way of understanding the world, and it is a reasonable way to understand the world because things exist relative to one another. And just as one thing is distinct from another, people have experiences. Those experiences are ways of knowing the world (broadly understood.) And just as reality is sufficient to explain distinction, so reality is sufficient to explain the fact that people have experiences.

How exactly does this answer the objection about interaction? In the case of distinction, the fact that “one thing is not another” is never the direct cause of anything, not even of the fact that “someone believes that one thing is not another.” So there would seem to be a “remarkable coincidence” here, or we would have to say that since the fact seems unrelated to the opinion, there is no reason to believe people are right when they make distinctions.

The answer in the case of distinction is that one thing is related to another, and this fact is the cause of someone believing that one thing is not another. There is no coincidence, and no reason to believe that people are mistaken when they make distinctions, despite the fact that distinction as such causes nothing.

In a similar way, “a human being is what it is,” and “a human being does what it does” (taken in an objective sense), cause human beings to say and believe that they have subjective experience (taking saying and believing to refer to objective facts). But this is precisely where the zombie question arises: they say and believe that they have subjective experience, when we interpret “say” and “believe” in the objective sense. But do they actually say and believe anything, considering saying and believing as including the subjective factor? Namely, when a non-zombie says something, it subjectively understands the meaning of what it is saying, and when it consciously believes something, it has a subjective experience of doing that, but these things would not apply to a zombie.

But notice that we can raise a similar question about zombie distinctions. When someone says and believes that one thing is not another, objective reality is similarly the cause of them making the distinction. But is the one thing actually not the other? Here there is no question at all except whether the person’s statement is true or false. And indeed, someone can say, e.g., “The person who came yesterday is not the person who came today,” and this can sometimes be false. In a similar way, asking whether an apparent person is a zombie or not is just asking whether their claim is true or false when they say they have a subjective experience. The difference is that if the (objective) claim is false, then there is no claim at all in the subjective sense of “subjectively claiming something.” It is a contradiction to subjectively make the false claim that you are subjectively claiming something, and thus, this cannot happen.

Someone may insist: you yourself, when you subjectively claim something, cannot be mistaken for the above reason. But you have no way to know whether someone else who is apparently making that claim is actually making it subjectively or not. This is the reason there is a hard problem.

How do we investigate the case of distinction? If we want to determine whether the person who came yesterday is not the person who came today, we do that by looking at reality, despite the fact that distinction as such is not a part of reality as such. If the person who came yesterday is now, today, a mile away from the person who came today, this gives us plenty of reason to say that the one person is not the other. There is nothing strange, however, in the fact that there is no infallible method to prove conclusively, once and for all, that one thing is definitely not another thing. There is not therefore some special “hard problem of distinction.” This is just a result of the fact that our knowledge in general is not infallible.

In a similar way, if we want to investigate whether something has subjective experience or not, we can do that only by looking at reality: what is this thing, and what does it do? Then suppose it makes an apparent claim that it has subjective experience. Obviously, for the above reasons, this cannot be a subjective claim that is false: so the question is whether it makes a subjective claim and is right, or rather makes no subjective claim at all. How would you answer this as an external observer?

In the case of distinction, the fact that someone claims that one thing is distinct from another is caused by reality, whether the claim is true or false. So whether it is true or false depends on the way that it is caused by reality. In a similar way, the thing which apparently and objectively claims to possess subjective experience, is caused to do so by objective facts. Again, as in the case of distinction, whether it is true or false will depend on the way that it is caused to do so by objective facts.

We can give some obvious examples:

“This thing claims to possess subjective experience because it is a human being and does what humans normally do.” In this case, the objective and subjective claim is true, and is caused in the right way by objective facts.

“This thing claims to possess subjective experience because it is a very simple computer given a very simple program to output ‘I have subjective experience’ on its screen.” In this case the external claim is false, and it is caused in the wrong way by objective facts, and there is no subjective claim at all.

But how do you know for sure, someone will object. Perhaps the computer really is conscious, and perhaps the apparent human is a zombie. But we could similarly ask how we can know for sure that the person who came yesterday isn’t the same person who came today, even though they appear distant from each other, because perhaps the person is bilocating?

It would be mostly wrong to describe this situation by saying “there really is no hard problem of consciousness,” as Robin Hanson appears to do when he says, “People who think they can conceive of such zombies see a ‘hard question’ regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel.” The implication seems to be that there is no hard question at all. But there is, and the fact that people engage in this discussion proves the existence of the question. Rather, we should say that the question is answerable, and that once it has been answered the remaining questions are “hard” only in the sense that it is hard to understand the world in general. The question is hard in exactly the way the question of Parmenides is hard: “How is it possible for one thing not to be another, when there is only being?” The question of consciousness is similar: “How is it possible for something to have subjective experience, when there are only objective things?” And the question can and should be answered in a similar fashion.

It would be virtually impossible to address every related issue in a simple blog post of this form, so I will simply mention some things that I have mainly set aside here:

1) The issue of formal causes, discussed more in my earlier treatment of this issue. This is relevant because “is this a zombie?” is in effect equivalent to asking whether the thing lacks a formal cause. This is worthy of a great deal of consideration and would go far beyond either this post or the earlier one.

2) The issue of “physical” and “material.” As I stated in this post, this is mainly a distraction. Most of the time, the real question is how the subjective is possible given that we believe that the world is objective. The only relevance of “matter” here is that it is obvious that a material thing is an objective thing. But of course, an immaterial thing would also have to be objective in order to be a thing at all. Aristotle and many philosophers of his school make the specific argument that the human mind does not have an organ, but such arguments are highly questionable, and in my view fundamentally flawed. My earlier posts suffice to call such a conclusion into question, but do not attempt to disprove it, and the topic would be worthy of additional consideration.

3) Specific questions about “what, exactly, would actually be conscious?” Now neglecting such questions might seem to be a cop-out, since isn’t this what the whole problem was supposed to be in the first place? But in a sense we did answer it. Take the case of something that apparently claims to be conscious. The question would be this: “Given how it was caused by objective facts to make that claim, would it be a reasonable claim for a subjective claimer to make?” In other words, we cannot assume in advance that it is subjectively making a claim, but if it would be a reasonable claim, it will (in general) be a true one, and therefore also a subjective one, for the same reason that we (in general) make true claims when we reasonably claim that one thing is not another. We have not answered this question only in the same sense that we have not exhaustively explained which things are distinct from which other things, and how one would know. But the question, e.g., “when if ever would you consider an artificial intelligence to be conscious?” is in itself also worthy of direct discussion.

4) The issue of vagueness. This issue in particular will cause some people to object to my answer here. Thus Alexander Pruss brings this up in a discussion of whether a computer could be conscious:

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

I responded in the comments there:

The transition between being conscious and not being conscious that happens when you fall asleep seems pretty vague. I don’t see why you find it implausible that “being conscious” could be vague in much the same way “being red” or “being intelligent” might be vague. In fact the evidence from experience (falling asleep etc) seems to directly suggest that it is vague.

Pruss responds:

When I fall asleep, I may become conscious of less and less. But I can’t get myself to deny that either it is definitely true at any given time that I am at least a little conscious or it is definitely true that I am not at all conscious.

But we cannot trust Pruss’s intuitions about what can and cannot be vague. Pruss claims in an earlier post that there is necessarily a sharp transition between someone’s not being old and someone’s being old. I discussed that post here. This is so obviously false that it gives us a reason in general not to trust Alexander Pruss on the issue of sharp transitions and vagueness. The source of this particular intuition may be the fact that you cannot subjectively make a claim, even vaguely, without some subjective experience, as well as his general impression that vagueness violates the principles of excluded middle and non-contradiction. But in a similar way, you cannot be vaguely old without being somewhat old. This does not mean that there is a sharp transition from not being old to being old, and likewise it does not necessarily mean that there is a sharp transition from not having subjective experience to having it.
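For what it is worth, the bare logical possibility of a vague predicate without any sharp transition can be pictured with a graded model. This is only an illustration of the point, with arbitrary numbers, not a claim about which theory of vagueness is correct: assign “old” a degree that varies continuously with age, so that no single year marks a jump from “not at all old” to “definitely old,” even though the extremes are perfectly definite.

```latex
% An illustrative degree function for "old"; the constants 60 and 5 are arbitrary.
\[
d_{\text{old}}(a) \;=\; \frac{1}{1 + e^{-(a-60)/5}},
\qquad d_{\text{old}}(20) \approx 0, \qquad d_{\text{old}}(90) \approx 1.
\]
```

Small changes in age produce only small changes in the degree, so there is no single sharp transition, even though someone of ninety is definitely old and someone of twenty is definitely not. The same picture is at least coherent for “being conscious” as one drifts off to sleep.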

While I have discussed the issue of vagueness elsewhere on this blog, this will probably continue to be a recurring feature, if only because of those who cannot accept this feature of reality and insist, in effect, on “this or nothing.”

Mary’s Surprising Response

In Consciousness Explained, Daniel Dennett proposes the following continuation to the story of Mary’s room:

And so, one day, Mary’s captors decided it was time for her to see colors. As a trick, they prepared a bright blue banana to present as her first color experience ever. Mary took one look at it and said “Hey! You tried to trick me! Bananas are yellow, but this one is blue!” Her captors were dumfounded. How did she do it? “Simple,” she replied. “You have to remember that I know everything—absolutely everything—that could ever be known about the physical causes and effects of color vision. So of course before you brought the banana in, I had already written down, in exquisite detail, exactly what physical impression a yellow object or a blue object (or a green object, etc.) would make on my nervous system. So I already knew exactly what thoughts I would have (because, after all, the “mere disposition” to think about this or that is not one of your famous qualia, is it?). I was not in the slightest surprised by my experience of blue (what surprised me was that you would try such a second-rate trick on me). I realize it is hard for you to imagine that I could know so much about my reactive dispositions that the way blue affected me came as no surprise. Of course it’s hard for you to imagine. It’s hard for anyone to imagine the consequences of someone knowing absolutely everything physical about anything!”

I don’t intend to fully analyze this scenario here, and for that reason I left it to the reader in the previous post. However, I will make two remarks: one on what is right (or possibly right) about this continuation, and one on what might be wrong about it.

The element that is basically right, or at least possibly right, is that if we assume that Mary knows all there is to know about color, including in its subjective aspect, it is reasonable to believe (even if not demonstrable) that she will be able to recognize the colors the first time she sees them. To gesture vaguely in this direction, we might consider that the color red can be somewhat agitating, while green and blue can be somewhat calming. These are not metaphorical associations, but actual emotional effects that they can have. Thus, if someone can recognize how their experience is affecting their emotions, it would be possible for them to say, “this seems more like the effect I would expect of green or blue, rather than red.” Obviously, this does not prove anything. But then, we do not in fact know what it is like to know everything there is to know about anything. As Dennett continues:

Surely I’ve cheated, you think. I must be hiding some impossibility behind the veil of Mary’s remarks. Can you prove it? My point is not that my way of telling the rest of the story proves that Mary doesn’t learn anything, but that the usual way of imagining the story doesn’t prove that she does. It doesn’t prove anything; it simply pumps the intuition that she does (“it seems just obvious”) by lulling you into imagining something other than what the premises require.

It is of course true that in any realistic, readily imaginable version of the story, Mary would come to learn something, but in any realistic, readily imaginable version she might know a lot, but she would not know everything physical. Simply imagining that Mary knows a lot, and leaving it at that, is not a good way to figure out the implications of her having “all the physical information”—any more than imagining she is filthy rich would be a good way to figure out the implications of the hypothesis that she owned everything.

By saying that the usual way of imagining the story “simply pumps the intuition,” Dennett is neglecting to point out what is true about the usual way of imagining the situation, and in that way he makes his own account seem less convincing. If Mary knows in advance all there is to know about color, then of course if she is asked afterwards, “do you know anything new about color?”, she will say no. But if we simply ask, “Is there anything new here?”, she will say, “Yes, I had a new experience which I never had before. But intellectually I already knew all there was to know about that experience, so I have nothing new to say about it. Still, the experience as such was new.” We are making the same point here as in the last post. To know a sensible experience intellectually is not to know it in the mode of sense knowledge, but in the mode of intellectual knowledge. So if one then engages in sense knowledge, there will be a new mode of knowing, but not a new thing known. Dennett’s account would be clearer and more convincing if he simply agreed that Mary will indeed acknowledge something new; just not new knowledge.

As for what might be wrong about the continuation, we might ask what Dennett intended by using the word “physical” repeatedly throughout this account, including in phrases like “know everything physical” and “all the physical information.” In my explanation of the continuation, I simply assume that Mary understands all that can be understood about color. Dennett seems to want some sort of limitation to the “physical information” that can be understood about color. But either this is a real limitation, excluding some sorts of claims about color, or it is no limitation at all. If it is not a limitation, then we can simply say that Mary understands everything there is to know about color. If it is a real limitation, then the continuation will almost certainly fail.

I suspect that the real issue here, for Dennett, is the suggestion of some sort of reductionism. But reductionism to what? If Mary is allowed to believe things like, “Most yellow things typically look brighter than most blue things,” then the limit is irrelevant, and Mary is allowed to know anything that people usually know about colors. But if the meaning is that Mary knows this only in a mathematical sense, that is, that she can have beliefs about certain mathematical properties of light and surfaces, rather than beliefs that are explicitly about blue and yellow things, then it will be a real limitation, and this limitation would cause his continuation to fail. We have basically the same issue here that I discussed in relation to Robin Hanson on consciousness earlier. If all of Mary’s statements are mathematical statements, then of course she will not know everything that people know about color. “Blue is not yellow” is not a mathematical statement, and it is something that we know about color. So we already know from the beginning that not all the knowledge that can be had about color is mathematical. Dennett might want to insist that it is “physical,” and surely blue and yellow are properties of physical things. If that is all he intends to say, namely that the properties she knows are properties of physical things, there is no problem here, but it does look like he intends to push further, to the point of possibly asserting something that would be evidently false.

Truth and Expectation

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply somewhat relax the claim about the relationship between meaning and expectations, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” or “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful: impossible because everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that they are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
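The probabilistic point can be stated explicitly; it is just the conjunction rule, a standard fact of probability theory rather than anything special to this case:

```latex
% A = "my life has a positive meaning"; E = any particular explanation of why this is so.
\[
P(A \wedge E) \;=\; P(A)\,P(E \mid A) \;\le\; P(A).
\]
```

The bare claim is therefore at least as probable as the claim conjoined with any particular explanation of it, and judging the conjunction to be more probable is exactly the conjunction fallacy.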

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
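The selection effect that Alexander describes can be illustrated with a toy simulation. This is only a sketch with made-up numbers: many honest stories whose “interestingness” is capped by real-world complications, plus a few fabricated stories free to fit the narrative perfectly, ranked by interestingness alone.

```python
import random

random.seed(0)

# Made-up numbers for illustration only.
N_TRUE, N_FAKE, TOP_K = 10_000, 100, 20

stories = []
# Honest stories: interestingness limited by subtleties and complicating factors.
for _ in range(N_TRUE):
    stories.append(("true", random.gauss(0.0, 1.0)))
# Fabricated stories: free to fit the narrative exactly, so shifted upward.
for _ in range(N_FAKE):
    stories.append(("fake", random.gauss(3.0, 1.0)))

# Readers only ever see the most "interesting" stories.
top = sorted(stories, key=lambda s: s[1], reverse=True)[:TOP_K]
share_fake = sum(label == "fake" for label, _ in top) / TOP_K

print(f"Fabricated stories are {N_FAKE / (N_TRUE + N_FAKE):.1%} of the pool "
      f"but {share_fake:.0%} of the top {TOP_K} by interestingness.")
```

With numbers like these, the fabrications make up about one percent of the pool but end up supplying most of the top slots, which is the Reddit-liars dynamic in miniature.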

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.

Zombies and Ignorance of the Formal Cause

Let’s look again at Robin Hanson’s account of the human mind, considered previously here.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

What would someone mean by making the original statement that “I know that physical parts interacting just aren’t the kinds of things that can feel by themselves”? If we give this a charitable interpretation, the meaning is that “a collection of physical parts” is something many, and so is not a suitable subject for predicates like “sees” and “understands.” Something that sees is something one, and something that understands is something one.

This however is not Robin’s interpretation. Instead, he understands it to mean that besides the physical parts, there has to be one additional part, namely one which is a part in the same sense of “part”, but which is not physical. And indeed, some people do tend to think this way. But this of course is not helpful, because the reason a collection of parts is not a suitable subject for seeing or understanding is not because those parts are physical, but because the subject is not something one. And this would remain even if you add a non-physical part or parts. Instead, what is needed for such a subject is that it be something one, namely a living being with the sense of sight, in order to see, or one with the power of reason, in order to understand.

What do you need in order to get one such subject from “a collection of parts”? Any additional part, physical or otherwise, will just make the collection bigger; it will not make the subject something one. It is rather the formal cause of a whole that makes the parts one, and this formal cause is not a part in the same sense. It is not yet another part, even a non-physical one.

Reading Robin’s discussion in this light, it is clear that he never even considers formal causes. He does not even ask whether there is such a thing. Rather, he speaks only of material and efficient causes, and appears to be entirely oblivious even to the idea of a formal cause. Thus when asking whether there is anything in addition to the “collection of parts,” he is asking whether there is any additional material cause. And naturally, nothing will have material causes other than the things it is made out of, since “what a thing is made out of” is the very meaning of a material cause.

Likewise, when he says, “Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?”, he shows in two ways his ignorance of formal causes. First, by talking about “feeling stuff,” which implies a kind of material cause. Second, when he says, “actual cause of humans making statements” he is evidently speaking about the efficient cause of people producing sounds or written words.

In both cases, formal causality is the relevant causality. There is no “feeling stuff” at all; rather, there are activities like seeing and understanding, which are unified actions, and they are unified by their forms. Likewise, we can consider the “humans making statements” in two ways: if we simply consider the efficient causes of the sounds, one by one, you might indeed explain them as “simple parts interacting simply.” But they are not actually mere sounds; they are meaningful, and they express the intention and meaning of a subject. And they have meaning by reason of the forms of the action and of the subject.

In other words, the idea of the philosophical zombie is that the zombie is indeed producing mere sounds. It is not only that the zombie is not conscious, but rather that it really is just interacting parts, and the sounds it produces are just a collection of sounds. We don’t need, then, some complicated method to determine that we are not such zombies. We are by definition not zombies if we say, think, or understand anything at all.

The same ignorance of the formal cause is seen in the rest of Robin’s comments:

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Again, he is asking whether there is some additional part which has some additional efficient causality, and suggesting that this is unlikely. It is indeed unlikely, but irrelevant, because consciousness is not an additional part, but a formal way of being that a thing has. He continues:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

First, there is no “extra feeling stuff.” There is only a way of being, namely in this case being alive and conscious. Second, there is no coincidence. Robin’s supposed coincidence is that “I am conscious” is thought to mean, “I have feeling stuff,” but the feeling stuff is not the efficient cause of my saying that I have it; instead, the efficient cause is said to be simple parts interacting simply.

Again, the mistake here is simply to completely overlook the formal cause. “I am conscious” does not mean that I have any feeling stuff; it says that I am something that perceives. Of course we can modify Robin’s question: what is the efficient cause of my saying that I am conscious? Is it the fact that I actually perceive things, or is it simple parts interacting simply? But if we think of this in relation to form, it is like asking whether the properties of a square follow from squareness, or from the properties of the parts of a square. And it is perfectly obvious that the properties of a square follow both from squareness, and from the properties of the parts of a square, without any coincidence, and without interfering with one another. In the same way, the fact that I perceive things is the efficient cause of my saying that I perceive things. But the only difference between this actual situation and a philosophical zombie is one of form, not of matter; in a corresponding zombie, “simple parts interacting simply” are the cause of its producing sounds, but it neither perceives anything nor asserts that it is conscious, since its words are meaningless.

The same basic issue, namely Robin’s lack of the concept of a formal cause, is responsible for his statements about philosophical zombies:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

The state of “feeling” is not presumed to have zero causal influence on behavior. It is thought to have precisely a formal influence on behavior. That is, being conscious is why the activity of the conscious person is “saying that they feel” instead of “producing random meaningless sounds that others mistakenly interpret as meaning that they feel.”

Robin is right that philosophical zombies are impossible, however, although not for the reasons that he supposes. The actual reason is that it is impossible for disposed matter to lack its corresponding form, and the idea of a zombie is precisely the idea of humanly disposed matter lacking human form.

Regarding his point about “info,” the possession of any information at all is already a proof that one is not a zombie. Since the zombie lacks form, any correlation between one part and another in it is essentially a random material correlation, not one that contains any information. If the correlation is noticed as having any info, then the thing noticing the information, and the information itself, are things which possess form. This argument, as far as it goes, is consistent with Robin’s claim that zombies do not make sense; they do not, but not for the reasons that he posits.

Zeal for Form, But Not According to Knowledge

Some time ago I discussed the question of whether the behavior of a whole should be predictable from the behavior of the parts, without fully resolving it. I promised at the time to revisit the question later, and this is the purpose of the present post.

In the discussion of Robin Hanson’s book Age of Em, we looked briefly at his account of the human mind. Let us look at a more extended portion of his argument about the mind:

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interaction effects don’t move in space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual in the sense that the state on all sides is influenced by states of the other sides.

For example, ordinary field theories have a limited number of fields at each point in space-time, with each field having a limited number of degrees of freedom. Each field has a few simple interactions with other fields, and with its own space-time derivatives. With limited energy, this latter effect limits how fast a field changes in space and time.

As a second example, ordinary digital electronics is made mostly of simple logic units, each with only a few inputs, a few outputs, and a few bits of internal state. Typically: two inputs, one output, and zero or one bits of state. Interactions between logic units are via simple wires that force the voltage and current to be almost the same at matching ends.

As a third example, cellular automatons are often taken as a clear simple metaphor for typical physical systems. Each such automaton has a discrete array of cells, each of which has a few possible states. At discrete time steps, the state of each cell is a simple standard function of the states of that cell and its neighbors at the last time step. The famous “game of life” uses a two dimensional array with one bit per cell.

This basic physics fact, that everything is made of simple parts interacting simply, implies that anything complex, able to represent many different possibilities, is made of many parts. And anything able to manage complex interaction relations is spread across time, constructed via many simple interactions built up over time. So if you look at a disk of a complex movie, you’ll find lots of tiny structures encoding bits. If you look at an organism that survives in a complex environment, you’ll find lots of tiny parts with many non-regular interactions.

Physicists have learned that we only ever get empirical evidence about the state of things via their interactions with other things. When such interactions make the state of one thing correlated with the state of another, we can use that correlation, together with knowledge of one state, as evidence about the other state. If a feature or state doesn’t influence any interactions with familiar things, we could drop it from our model of the world and get all the same predictions. (Though we might include it anyway for simplicity, so that similar parts have similar features and states.)

Not only do we know that in general everything is made of simple parts interacting simply, for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth. For humans and their immediate environments on Earth, we know exactly what are all the parts, what states they hold, and all of their simple interactions. Thermodynamics assures us that there can’t be a lot of hidden states around holding many bits that interact with familiar states.

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what math implies. When we can figure out quantities that are easier to calculate, as long as the parts and interactions we think are going on are in fact the only things going on, then we usually see those quantities just as calculated.
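As a concrete aside on the cellular automaton example in the passage above: the “game of life” Hanson mentions really is a case of many simple parts interacting simply, and a minimal sketch of it fits in a few lines of Python (the seed pattern and step count below are arbitrary choices of mine, purely for illustration).

```python
from collections import Counter

def step(live):
    """Given a set of live (x, y) cells, return the next generation.

    Each cell's next state depends only on its own state and its eight
    neighbors: a simple, local, mutual interaction between simple parts.
    """
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has exactly 3 live neighbors,
    # or if it is currently live and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row, which oscillates with period two.
cells = {(0, 1), (1, 1), (2, 1)}
for _ in range(2):
    cells = step(cells)
print(sorted(cells))  # the original row returns after two steps
```

Everything the pattern does over time is implied by that one local rule, which is exactly the sense in which Hanson says that complex outcomes are “implied by the math” even when they are hard to calculate.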

The point of Robin’s argument is to take a particular position in regard to the question we are revisiting in this post: everything that is done by wholes is predictable from the behavior of the parts. The argument is simply a more extended form of a point I made in the earlier post, namely that there is no known case where the behavior of a whole is known not to be predictable in such a way, and that there are many known cases where it certainly is predictable in this way.

The title of the present post of course refers us to this earlier post. In that post I discussed the tendency to set first and second causes in opposition, and noted that the resulting false dichotomy leads to two opposite mistakes: the denial of a first cause on the one hand, and the assertion that the first cause does or should work without secondary causes on the other.

In the same way, I say it is a false dichotomy to set the work of form in opposition to the work of matter and disposition. Rather, they produce the same thing, both according to being and according to activity, but in different respects. If this is the case, it will be necessarily true from the nature of things that the behavior of a whole is predictable from the behavior of the parts, but this will happen in a particular way.

I mentioned an example of the same false dichotomy in the post on Robin’s book. Here again is his argument:

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

I am currently awake and conscious, hearing the sounds of my keyboard as I type and the music playing in the background. Robin’s argument is something like this: why did I type the previous sentence? Is it because I am in fact awake and conscious and actually heard these sounds? If in principle it is predictable that I would have typed that, based on the simple interactions of simple parts, that seems to be an entirely different explanation. So either one might be the case or the other, but not both.

We have seen this kind of argument before. C.S. Lewis made this kind of argument when he said that thought must have reasons only, and no causes. Similarly, there is the objection to the existence of God, “But it seems that everything we see in the world can be accounted for by other principles, supposing God did not exist.” Just as in those cases we have a false dichotomy between the first cause and secondary causes, and between the final cause and efficient causes, so here we have a false dichotomy between form and matter.

Let us consider this in a simpler case. We earlier discussed the squareness of a square. Suppose someone attempted to apply Robin’s argument to squares. The equivalent argument would say this: all conclusions about a square can be proved from premises about the four lines that make it up and their relationships. So what use is this extra squareness? We might as well assume it does not exist, since it cannot explain anything.
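A worked instance of that equivalent argument (the particular property chosen here is mine, for illustration): the length of a square’s diagonal can be proved from the parts alone, using nothing but two adjacent sides of length $s$ and the right angle between them,

$$d = \sqrt{s^2 + s^2} = s\sqrt{2}.$$

The derivation never mentions squareness; and yet the diagonal also belongs to the figure precisely as a square, since only the one whole figure has a diagonal at all. The two explanations are not rivals, and that is the point at issue in what follows.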

In order to understand this one should consider why we need several kinds of cause in the first place. To assign a cause is just to give the origin of a thing in a way that explains it, while explanation has various aspects. In the linked post, we divided causes into two, namely intrinsic and extrinsic, and then divided each of these into two. But consider what would happen if we did not make the second division. In this case, there would be two causes of a thing: matter subject to form, and agent intending an end. We can see from this how the false dichotomies arise: all the causality of the end must be included in some way in the agent, since the end causes by informing the agent, and all the causality of the form must be included in some way in the matter, since the form causes by informing the matter.

In the case of the square, even the linked post noted that there was an aspect of the square that could not be derived from the properties of its parts: namely, the fact that a square is one figure, rather than simply many lines. This is the precise effect of form in general: to make a thing be what it is.

Consider Alexander Pruss’s position on artifacts. He basically asserted that artifacts do not truly exist, on the grounds that they seem to be lacking a formal cause. In this way, he says, they are just a collection of parts, just as someone might suppose that a square is just a collection of lines, and that there is no such thing as squareness. My response there was the same as my response about the square: saying that this is just a collection cannot explain why a square is one figure, nor can the same account explain the fact that artifacts do have a unity of some kind. Just as the denial of squareness would mean the denial of the existence of a unified figure, so the denial of chairness would mean the denial of the existence of chairs. Unlike Sean Carroll, Pruss seems even to recognize that this denial follows from his position, even if he is ambivalent about it at times.

Hanson’s argument about the human mind is actually rather similar to Pruss’s argument about artifacts, and to Carroll’s argument about everything. The question of whether or not the fact that I am actually conscious influences whether I say that I am, is a reference to the idea of a philosophical zombie. Robin discusses this idea more directly in another post:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

These claims all follow from our very standard and well-established info theory. We get info about things by interacting with them, so that our states become correlated with the states of those things. But by assumption this hypothesized extra “feeling” state never interacts with anything. The actual reason why you feel compelled to assert very confidently that you really do feel has no causal connection with whether you actually do really feel. You would have been just as likely to say it if it were not true. What could possibly be the point of hypothesizing and forming beliefs about states about which one can never get any info?
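For reference, the “very standard and well-established info theory” being appealed to can be put in one line (the notation is the usual one, not Hanson’s): the information that the state $X$ of one thing carries about the state $Y$ of another is their mutual information,

$$I(X;Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},$$

which is zero exactly when the two states are independent, that is, when $p(x,y) = p(x)\,p(y)$ for every $x$ and $y$. A “feeling” state that by hypothesis never interacts with anything remains independent of every other state, and so carries zero mutual information with any of them; that is the content of the claim that zombies would leave us with “pretty much zero info.”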

We noted the unresolved tension in Sean Carroll’s position. The eliminativists are metaphysically correct, he says, but they are mistaken to draw the conclusion that the things of our common experience do not exist. The problem is that given that he accepts the eliminativist metaphysics, he can have no justification for rejecting their conclusions. We can see the same tension in Robin Hanson’s account of consciousness and philosophical zombies. For example, why does he say that they do not “make sense,” rather than asking whether or not they can exist and why or why not?

Let us think about this in more detail. And to see more clearly the issues involved, let us consider a simpler case. Take the four chairs in Pruss’s office. Is it possible that one of them is a zombie?

What would this even mean? In the post on the relationship of form and reality, we noted that asking whether something has a form is very close to the question of whether something is real. I really have two hands, Pruss says, if my hands have forms. And likewise chairs are real chairs if they have the form of a chair, and if they do not, they are not real in the first place, as Pruss argues is the case.

The zombie question about the chair would then be this: is it possible that one of the apparent chairs, physically identical to a real chair, is yet not a real chair, while the three others are real?

We should be able to understand why someone would want to say that the question “does not make sense” here. What would it even be like for one of the chairs not to be a real chair, especially if it is posited to be identical to all of the others? In reality, though, the question does make sense, even if we answer that the thing cannot happen. In this case it might actually be more possible than in other cases, since artifacts are in part informed by human intentions. But possible or not, the question surely makes sense.

Let us consider the case of natural things. Consider the zombie oak tree: it is physically identical to an oak tree, but it is not truly alive. It appears to grow, but this is just the motion of particles. There are three positions someone could hold: no oak trees are zombie oaks, since all are truly alive and grow; all oak trees are zombies, since all are mere collections of particles; and some are alive and grow, while others are zombies, being mere collections of particles.

Note that the question does indeed make sense. It is hard to see why anyone would accept the third position, but if the first and second positions make sense, then the third does as well. It has an intelligible content, even if it is one that we have no good arguments for accepting. The argument that it does not make sense is basically the claim that the first and second positions are not distinct positions: they do not say different things, but the same thing. Thus the third would “not make sense” insofar as it assumes that the first and second positions are distinct positions.

Why would someone suppose that the first and second positions are not distinct? This is basically Sean Carroll’s position, since he tries to say both that eliminativists are correct about what exists, but incorrect in denying the existence of common sense things like oak trees. It is useful to say, “oak trees are real,” he says, and therefore we will say it, but we do not mean to say something different about reality than the eliminativists who say that “oak trees are not real but mere collections of particles.”

But this is wrong. Carroll’s position is inconsistent in virtually the most direct possible way. Either oak trees are real or they are not; and if they are real, then they are not mere collections of particles. So both the first and second positions are meaningful, and consequently also the third.

The second and third positions are false, however, and the meaningfulness of the question becomes especially clear when we speak of the human case. It obviously does make sense to ask whether other human beings are conscious, and this is simply to ask whether their apparent living activities, such as speaking and thinking, are real living activities, or merely apparent ones: perhaps the thing is making sounds, but it is not truly speaking or thinking.

Let us go back to the oak tree for a moment. The zombie oak would be one that is not truly living, but its activities, apparently full of life, are actually lifeless. In order to avoid this possibility, and out of a zeal for form which is not according to knowledge, some assert that the activities of an oak cannot be understood in terms of the activities of the parts. There is a hint of this, perhaps, in this remark by James Chastek:

Consciousness is just the latest field where we are protesting that something constitutes a specific difference from some larger genus, but if it goes the way the others have gone, in fifty years no one will even remember the controversy or bother to give the fig-leaf explanations of it being emergent or reductive. No one will remember that there is a difference to explain. Did anyone notice in tenth-grade biology that life was explained entirely in terms of non-living processes? No. There was nothing to explain since nothing was noticed.

Chastek does not assert that life cannot be “explained entirely in terms of non-living processes,” in the manner of tenth-grade biology, but he perhaps would prefer that it could not be so explained. And the reason for this would be the idea that if everything the living thing does can be explained in terms of the parts, then oak trees are zombies after all.

But this idea is mistaken. Look again at the square: the parts explain everything, except the fact that the figure is one figure, and a square. The form of a square is indeed needed, precisely in order that the thing will actually be a whole and a square.

Likewise with the oak. If an oak tree is made out of parts, then since activity follows being, it should be unsurprising that in some sense its activities themselves will be made out of parts, namely the activities of its parts. But the oak is real, and its activities are real. And just as oaks really exist, so they really live and grow; but just as the living oak has parts which are not alive in themselves, such as elements, so the activity of growth contains partial activities which are not living activities in themselves. What use is the form of an oak, then? It makes the tree really an oak and really alive; and it makes its activities living activities such as growth, rather than being merely a collection of non-living activities.

We can look at human beings in the same way, but I will leave the details of this for another post, since this one is long enough already.