Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what a mind in general needs in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.
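To make the situation concrete, here is a minimal toy sketch in Python. The two-channel world, the update rule, and all the names are my own invention for illustration, not anything taken from the predictive processing literature. The agent only tries to guess the next state of the world; one channel drifts on its own, while the other simply echoes whatever the agent guessed last time, so every guess about it is self-fulfilling and the prediction error there is necessarily zero.

```python
import random

# Toy illustration (all details invented): an agent guesses the next value of two
# "world" channels. Channel 0 evolves on its own; channel 1 simply echoes the
# agent's latest guess, so any guess about it comes true.

def world_step(state, guess):
    # Channel 0: outside the agent's control (a noisy random walk).
    channel0 = state[0] + random.choice([-1, 0, 1])
    # Channel 1: whatever the agent just predicted actually happens.
    channel1 = guess[1]
    return (channel0, channel1)

def agent_guess(history):
    # The agent predicts by simple induction: "tomorrow will look like today."
    return history[-1]

state = (0, 0)
history = [state]
errors = [0, 0]
for t in range(1000):
    guess = agent_guess(history)
    state = world_step(state, guess)
    history.append(state)
    errors[0] += abs(state[0] - guess[0])
    errors[1] += abs(state[1] - guess[1])

print("total error on the uncontrolled channel:", errors[0])
print("total error on the self-fulfilling channel:", errors[1])  # always 0
```

The agent here never set out to control anything; it is just that, on the second channel, guessing and causing coincide, which is exactly the discovery described above.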

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end a sense of free will probably could not arise in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs enough access to the world to learn about itself and its own effects on that world. The sense cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
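The two ways of self-prediction can be sketched in a few lines of Python; the flavor history and the “pleasantness” scores below are invented purely for illustration. The first function predicts by induction over past choices, the second by asking which option best serves the end those choices seem to have been aiming at.

```python
from collections import Counter

# Toy sketch (invented example): two ways for the mind to guess its own next choice.

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]
options = ["vanilla", "chocolate"]

# A made-up score standing in for "pleasantness of taste", the inferred final cause.
tastiness = {"vanilla": 0.9, "chocolate": 0.7}

def predict_by_habit(history, options):
    # Efficient-cause style: "on past occasions I almost always chose X,
    # so I am likely to choose X again."
    counts = Counter(history)
    return max(options, key=lambda o: counts[o])

def predict_by_goal(options, value):
    # Final-cause style: "my past choices look like they aimed at pleasant taste,
    # so I will probably take whatever tastes best."
    return max(options, key=lambda o: value[o])

print(predict_by_habit(past_choices, options))   # vanilla, by habit
print(predict_by_goal(options, tastiness))       # vanilla, by goal
```

Both strategies give the same answer here, but they generalize differently: the first tracks habit, while the second tracks the imputed end, and so looks from the outside like goal-seeking.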

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their lives in terms of a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. They make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Zombies and Ignorance of the Formal Cause

Let’s look again at Robin Hanson’s account of the human mind, considered previously here.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

What would someone mean by making the original statement that “I know that physical parts interacting just aren’t the kinds of things that can feel by themselves”? If we give this a charitable interpretation, the meaning is that “a collection of physical parts” is something many, and so is not a suitable subject for predicates like “sees” and “understands.” Something that sees is something one, and something that understands is something one.

This however is not Robin’s interpretation. Instead, he takes it to mean that besides the physical parts, there must be one additional part, a part in the same sense of the word, but one which is not physical. And indeed, some people do tend to think this way. But this is not helpful, because the reason a collection of parts is not a suitable subject for seeing or understanding is not that those parts are physical, but that the subject is not something one. And this would remain the case even if you added a non-physical part or parts. What is needed instead is that the subject be something one: a living being with the sense of sight, in order to see, or one with the power of reason, in order to understand.

What do you need in order to get one such subject from “a collection of parts”? Any additional part, physical or otherwise, will just make the collection bigger; it will not make the subject something one. It is rather the formal cause of a whole that makes the parts one, and this formal cause is not a part in the same sense. It is not yet another part, even a non-physical one.

Reading Robin’s discussion in this light, it is clear that he never even considers formal causes. He does not even ask whether there is such a thing. Rather, he speaks only of material and efficient causes, and appears to be entirely oblivious even to the idea of a formal cause. Thus when asking whether there is anything in addition to the “collection of parts,” he is asking whether there is any additional material cause. And naturally, nothing will have material causes other than the things it is made out of, since “what a thing is made out of” is the very meaning of a material cause.

Likewise, when he says, “Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?”, he shows in two ways his ignorance of formal causes. First, by talking about “feeling stuff,” which implies a kind of material cause. Second, when he says, “actual cause of humans making statements” he is evidently speaking about the efficient cause of people producing sounds or written words.

In both cases, formal causality is the relevant causality. There is no “feeling stuff” at all; rather, certain things perform acts like seeing or understanding, which are unified actions, and they are unified by their forms. Likewise, we can consider the “humans making statements” in two ways: if we simply consider the efficient causes of the sounds, one by one, we might indeed explain them as “simple parts interacting simply.” But they are not actually mere sounds; they are meaningful and express the intention and meaning of a subject. And they have meaning by reason of the forms of the action and of the subject.

In other words, the idea of the philosophical zombie is that the zombie is indeed producing mere sounds. It is not only that the zombie is not conscious, but that it really is just interacting parts, and the sounds it produces are just a collection of sounds. We do not need, then, some complicated method to determine that we are not such zombies. We are by definition not zombies if we say, think, or understand anything at all.

The same ignorance of the formal cause is seen in the rest of Robin’s comments:

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Again, he is asking whether there is some additional part which has some additional efficient causality, and suggesting that this is unlikely. It is indeed unlikely, but irrelevant, because consciousness is not an additional part, but a formal way of being that a thing has. He continues:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

First, there is no “extra feeling stuff.” There is only a way of being, namely in this case being alive and conscious. Second, there is no coincidence. Robin’s supposed coincidence is that “I am conscious” is thought to mean, “I have feeling stuff,” but the feeling stuff is not the efficient cause of my saying that I have it; instead, the efficient cause is said to be simple parts interacting simply.

Again, the mistake here is simply to completely overlook the formal cause. “I am conscious” does not mean that I have any feeling stuff; it says that I am something that perceives. Of course we can modify Robin’s question: what is the efficient cause of my saying that I am conscious? Is it the fact that I actually perceive things, or is it simple parts interacting simply? But if we think of this in relation to form, it is like asking whether the properties of a square follow from squareness, or from the properties of the parts of a square. And it is perfectly obvious that the properties of a square follow both from squareness, and from the properties of the parts of a square, without any coincidence, and without interfering with one another. In the same way, the fact that I perceive things is the efficient cause of my saying that I perceive things. But the only difference between this actual situation and a philosophical zombie is one of form, not of matter; in a corresponding zombie, “simple parts interacting simply” are the cause of its producing sounds, but it neither perceives anything nor asserts that it is conscious, since its words are meaningless.
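The square analogy can be made concrete with a trivially small worked example (my own, not Robin’s): the perimeter of a square follows both from what a square is and from its parts, and the two computations agree by necessity rather than by coincidence.

```python
# Tiny illustration (my own example): the perimeter of a square follows
# "from squareness" (the formula true of squares as such) and also
# "from its parts" (adding up the four sides one by one).

side = 3.0
sides = [side, side, side, side]       # the square's parts

perimeter_from_form = 4 * side         # follows from what a square is
perimeter_from_parts = sum(sides)      # follows from the parts, one by one

assert perimeter_from_form == perimeter_from_parts   # no coincidence, no conflict
```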

The same basic issue, namely Robin’s lack of the concept of a formal cause, is responsible for his statements about philosophical zombies:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

The state of “feeling” is not presumed to have zero causal influence on behavior. It is thought to have precisely a formal influence on behavior. That is, being conscious is why the activity of the conscious person is “saying that they feel” instead of “producing random meaningless sounds that others mistakenly interpret as meaning that they feel.”

Robin is right, however, that philosophical zombies are impossible, although not for the reasons that he supposes. The actual reason is that it is impossible for disposed matter to lack its corresponding form, and the idea of a zombie is precisely the idea of humanly disposed matter lacking human form.

Regarding his point about “info,” the possession of any information at all is already proof that one is not a zombie. Since the zombie lacks form, any correlation between one part of it and another is essentially a random material correlation, not one that contains any information. If a correlation is noticed as carrying any information, then the thing noticing it, and the information itself, are things which possess form. This argument, as far as it goes, is consistent with Robin’s claim that zombies do not make sense; they do not, but not for the reasons that he posits.