Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what a mind in general requires in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end a sense of free will probably could not arise in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs enough access to the world to learn about itself and its own effects on the world. The sense cannot develop where access to reality is limited, for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”
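The difference between the two strategies can be put into a toy model. In the hedged sketch below (the flavors, numbers, and function names are all invented for illustration), the mind predicts its own next action first by induction over its past behavior, and then by inferring a final cause and asking which action would serve it:

    from collections import Counter

    history = ["vanilla", "vanilla", "chocolate", "vanilla"]   # hypothetical past choices
    tastiness = {"vanilla": 0.8, "chocolate": 0.6}             # hypothetical inferred pleasantness

    def predict_by_habit(history):
        # Efficient-cause style: induction over what I usually did.
        return Counter(history).most_common(1)[0][0]

    def predict_by_goal(tastiness):
        # Final-cause style: infer the end my past choices served,
        # then predict the action that best serves that end.
        return max(tastiness, key=tastiness.get)

    print(predict_by_habit(history))    # vanilla, because vanilla is the habit
    print(predict_by_goal(tastiness))   # vanilla, because vanilla serves the inferred goal

Both strategies predict vanilla here, but for different reasons: the first because vanilla is what usually happened, the second because vanilla best serves the goal the mind has attributed to itself.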

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. The two things are distinct but related, and both are ways of making our own actions more intelligible: both make the mind’s task easier, since we need purpose and narrative in order to know what we are going to do. We can also see why it seems possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one concrete end or another, even though in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Idealized Idealization

On another occasion, I discussed the Aristotelian idea that the act of the mind does not use an organ. In an essay entitled “Immaterial Aspects of Thought,” James Ross claims that he can establish the truth of this position definitively. He summarizes the argument:

Some thinking (judgment) is determinate in a way no physical process can be. Consequently, such thinking cannot be (wholly) a physical process. If all thinking, all judgment, is determinate in that way, no physical process can be (the whole of) any judgment at all. Furthermore, “functions” among physical states cannot be determinate enough to be such judgments, either. Hence some judgments can be neither wholly physical processes nor wholly functions among physical processes.

Certain thinking, in a single case, is of a definite abstract form (e.g. N x N = N²), and not indeterminate among incompossible forms (see I below). No physical process can be that definite in its form in a single case. Adding cases even to infinity, unless they are all the possible cases, will not exclude incompossible forms. But supplying all possible cases of any pure function is impossible. So, no physical process can exclude incompossible functions from being equally well (or badly) satisfied (see II below). Thus, no physical process can be a case of such thinking. The same holds for functions among physical states (see IV below).

In essence, the argument is that squaring a number and similar things are infinitely precise processes, and no physical process is infinitely precise. Therefore squaring a number and similar things are not physical processes.

The problem is unfortunately with the major premise here. Squaring a number, and similar operations, in the way that we in fact perform them, are not infinitely precise processes.
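A concrete illustration: when squaring is actually carried out by a physical device, it is carried out with finite precision. A minimal Python sketch of the point, using IEEE-754 double-precision arithmetic:

    x = 2**27 + 1              # 134217729
    exact = x * x              # Python integers square exactly: 18014398777917441
    approx = float(x) ** 2     # double precision must round the result
    print(exact)               # 18014398777917441
    print(int(approx))         # 18014398777917440 (one less than the true square)
    print(exact == approx)     # False: the physical process was only approximate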

Ross argues that they must be:

Can judgments really be of such definite “pure” forms? They have to be; otherwise, they will fail to have the features we attribute to them and upon which the truth of certain judgments about validity, inconsistency, and truth depend; for instance, they have to exclude incompossible forms or they would lack the very features we take to be definitive of their sorts: e.g., conjunction, disjunction, syllogistic, modus ponens, etc. The single case of thinking has to be of an abstract “form” (a “pure” function) that is not indeterminate among incompossible ones. For instance, if I square a number–not just happen in the course of adding to write down a sum that is a square, but if I actually square the number–I think in the form “N x N = N².”

The same point again. I can reason in the form, modus ponens (“If p then q”; “p”; “therefore, q”). Reasoning by modus ponens requires that no incompossible forms also be “realized” (in the same sense) by what I have done. Reasoning in that form is thinking in a way that is truth-preserving for all cases that realize the form. What is done cannot, therefore, be indeterminate among structures, some of which are not truth preserving. That is why valid reasoning cannot be only an approximation of the form, but must be of the form. Otherwise, it will as much fail to be truth-preserving for all relevant cases as it succeeds; and thus the whole point of validity will be lost. Thus, we already know that the evasion, “We do not really conjoin, add, or do modus ponens but only simulate them,” cannot be correct. Still, I shall consider it fully below.

“It will as much fail to be truth-preserving for all relevant cases as it succeeds” is an exaggeration here. If you perform an operation which approximates modus ponens, then that operation will be approximately truth-preserving. It will not be equally truth-preserving and non-truth-preserving.
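The point can even be checked by simulation. In the hedged sketch below (the one-percent error rate is an invented figure), an operation that applies modus ponens correctly except for occasional slips preserves truth in roughly ninety-nine percent of cases, not in half of them:

    import random

    random.seed(0)

    def approximate_modus_ponens(p, p_implies_q, error_rate=0.01):
        # Infer q from p and (p -> q), but occasionally slip and
        # return the wrong conclusion.
        q = p and p_implies_q
        return (not q) if random.random() < error_rate else q

    trials = 100_000
    correct = sum(approximate_modus_ponens(True, True) is True for _ in range(trials))
    print(correct / trials)    # roughly 0.99: approximately truth-preserving,
                               # not "as much failing as succeeding"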

I have noted many times in the past, as for example here, here, here, and especially here, that following the rules of syllogism does not in practice infallibly guarantee that your conclusions are true, even if your premises are in some way true, because of the vagueness of human thought and language. In essence, Ross is making a contrary argument: we know, he is claiming, that our arguments infallibly succeed; therefore our thoughts cannot be vague. But it is empirically false that our arguments infallibly succeed, so the argument is mistaken right from its starting point.

There is also a strawmanning of the opposing position here insofar as Ross describes those who disagree with him as saying that “we do not really conjoin, add, or do modus ponens but only simulate them.” This assumes that unless you are doing these things perfectly, rather than approximating them, then you are not doing them at all. But this does not follow. Consider a triangle drawn on a blackboard. Consider which of the following statements is true:

  1. There is a triangle drawn on the blackboard.
  2. There is no triangle drawn on the blackboard.

Obviously, the first statement is true, and the second false. But in Ross’s way of thinking, we would have to say, “What is on the blackboard is only approximately triangular, not exactly triangular. Therefore there is no triangle on the blackboard.” This of course is wrong, and his description of the opposing position is wrong in the same way.

Naturally, if we take “triangle” as shorthand for “exact rather than approximate triangle,” then (2) will be true. And in a similar way, if we take “really conjoin” and so on as shorthand for “really conjoin exactly and not approximately,” then those who disagree will indeed say that we do not do those things. But this is not a problem unless you are assuming from the beginning that our thoughts are infinitely precise, and Ross is attempting to establish that this must be the case, rather than claiming to take it as given. (That is, the summary takes it as given, but Ross attempts throughout the article to establish it.)

One could attempt to defend Ross’s position as follows: we must have infinitely precise thoughts, because we can understand the words “infinitely precise thoughts.” Or in the case of modus ponens, we must have an infinitely precise understanding of it, because we can distinguish between “modus ponens, precisely,” and “approximations of modus ponens“. But the error here is similar to the error of saying that one must have infinite certainty about some things, because otherwise one will not have infinite certainty about the fact that one does not have infinite certainty, as though this were a contradiction. It is no contradiction for all of your thoughts to be fallible, including this one, and it is no contradiction for all of your thoughts to be vague, including your thoughts about precision and approximation.

The title of this post in fact refers to this error, which is probably the fundamental problem in Ross’s argument. Triangles in the real world are not perfectly triangular, but we have an idealized concept of a triangle. In precisely the same way, the process of idealization in the real world is not an infinitely precise process, but we have an idealized concept of idealization. Concluding that our acts of idealization must actually be ideal in themselves, simply because we have an idealized concept of idealization, would be a case of confusing the way of knowing with the way of being. It is a particularly confusing case simply because the way of knowing in this case is also materially the being which is known. But this material identity does not make the mode of knowing into the mode of being.

We should consider also Ross’s minor premise, that a physical process cannot be determinate in the way required:

Whatever the discriminable features of a physical process may be, there will always be a pair of incompatible predicates, each as empirically adequate as the other, to name a function the exhibited data or process “satisfies.” That condition holds for any finite actual “outputs,” no matter how many. That is a feature of physical process itself, of change. There is nothing about a physical process, or any repetitions of it, to block it from being a case of incompossible forms (“functions”), if it could be a case of any pure form at all. That is because the differentiating point, the point where the behavioral outputs diverge to manifest different functions, can lie beyond the actual, even if the actual should be infinite; e.g., it could lie in what the thing would have done, had things been otherwise in certain ways. For instance, if the function is x(*)y = (x + y, if y < 10^40 years, = x + y +1, otherwise), the differentiating output would lie beyond the conjectured life of the universe.

Just as rectangular doors can approximate Euclidean rectangularity, so physical change can simulate pure functions but cannot realize them. For instance, there are no physical features by which an adding machine, whether it is an old mechanical “gear” machine or a hand calculator or a full computer, can exclude its satisfying a function incompatible with addition, say quaddition (cf. Kripke’s definition of the function to show the indeterminacy of the single case: quus, symbolized by the plus sign in a circle, “is defined by: x quus y = x + y, if x, y < 57, =5 otherwise”) modified so that the differentiating outputs (not what constitutes the difference, but what manifests it) lie beyond the lifetime of the machine. The consequence is that a physical process is really indeterminate among incompatible abstract functions.

Extending the list of outputs will not select among incompatible functions whose differentiating “point” lies beyond the lifetime (or performance time) of the machine. That, of course, is not the basis for the indeterminacy; it is just a grue-like illustration. Adding is not a sequence of outputs; it is summing; whereas if the process were quadding, all its outputs would be quadditions, whether or not they differed in quantity from additions (before a differentiating point shows up to make the outputs diverge from sums).

For any outputs to be sums, the machine has to add. But the indeterminacy among incompossible functions is to be found in each single case, and therefore in every case. Thus, the machine never adds.

There is some truth here, and some error here. If we think about a physical process in the particular way that Ross is considering it, it will be true that it will always be able to be interpreted in more than one way. This is why, for example, in my recent discussion with John Nerst, John needed to say that the fundamental cause of things had to be “rules” rather than e.g. fundamental particles. The movement of particles, in itself, could be interpreted in various ways. “Rules,” on the other hand, are presumed to be something which already has a particular interpretation, e.g. adding as opposed to quadding.
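Kripke’s example is easy to make concrete. In the sketch below (the threshold comes straight from the definition Ross quotes), addition and quaddition agree on every case that a finite machine may ever actually compute, so no finite record of outputs can discriminate between them; only the rule itself does:

    def plus(x, y):
        return x + y

    def quus(x, y):
        # Kripke's quus: agrees with addition when both arguments are below 57.
        if x < 57 and y < 57:
            return x + y
        return 5

    # On every case below the threshold, the two functions agree...
    assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))

    # ...and the differentiating point may simply never come up in practice.
    print(plus(60, 60), quus(60, 60))    # 120 5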

On the other hand, there is also an error here. The prima facie sign of this error is the statement that an adding machine “never adds.” Just as according to common sense we can draw triangles on blackboards, so according to common sense the calculator on my desk can certainly add. This is connected with the problem with the entire argument. Since “the calculator can add” is true in some way, there is no particular reason that “we can add” cannot be true in precisely the same way. Ross wishes to argue that we can add in a way that the calculator cannot because, in essence, we do it infallibly; but this is flatly false. We do not do it infallibly.

Considered metaphysically, the problem here is ignorance of the formal cause. If physical processes were entirely formless, they indeed would have no interpretation, just as a formless human (were that possible) would be a philosophical zombie. But in reality there are forms in both cases. In this sense, Ross’s argument comes close to saying “human thought is a form or formed, but physical processes are formless.” Since in fact neither is formless, there is no reason (at least established by this argument) why thought could not be the form of a physical process.


An Existential Theory of Relativity

Paul Almond suggests a kind of theory of relativity applied to existence (section 3.1):

It makes sense to view reality in terms of an observer-centred world, because the only things of which you have direct knowledge are your basic perceptions – both inner and outer – at any instant. Anything else that you know – including your knowledge of the past or future – can only be inferred from these perceptions.

We are not trying to establish some silly idea here that things, including other people, only exist when you observe them, that they only start existing when you start observing them, and that they cease existing when you stop observing them. Rather, it means that anything that exists can only be coherently described as existing somewhere in your observer-centred world. There can still be lots of things that you do not know about. You do not know everything about your observer-centred world, and you can meaningfully talk about the possibility or probability that some particular thing exists. In saying this, you are talking about what may be “out there” somewhere in your observer-centred world. You are talking about the form that your observer-centred world may take, and there is nothing to prevent you from considering different forms that it may take. It would, therefore, be a straw man argument to suggest that we are saying that things only exist when observed by a conscious observer.

As an example, suppose you wonder if, right now, there is an alien spaceship in orbit around Proxima Centauri, a nearby star. What we have said does not make it invalid at all for you to speculate about such a thing, or even to try to put a probability on it if you are so inclined. The point is that any speculation you make, or any probability calculations you try to perform, are about what your observer-centred world might be like.

This view is reasonable because to say that anything exists in a way that cannot be understood in observer-centred world terms is incoherent. If you say something exists you are saying it fits into your “world view”. It must relate to all the other things that you think exist or that you might in principle say exist if you knew enough. Something might exist beyond the horizon in your observer-centred world – in the part that you do not know about – but if something is supposed to exist outside your observer-centred world completely, where would it be? (Here we mean “where” in a more general “ontological” sense.)

As an analogy, this is somewhat similar to the way that relativity deals with velocities. Special relativity says that the concept of “absolute velocity” is incoherent, and that the concept of “velocity” only makes sense in some frame of reference. Likewise, we are saying here that the concept of “existence” only makes sense in the same kind of way. None of this means that consciousness must exist. It is simply saying that it is meaningless to talk about reality in non-observer-centred world terms. It is still legitimate to ask for an explanation of your own existence. It simply means that such an explanation must lie “out there” in your observer-centred world.

This seems right, more or less, but it could be explained more clearly. In the first place Almond is referring to the fact that we see the world as though it existed around us as a center, a concept that we have discussed on various past occasions. But in particular he is insisting that in order to say that anything exists at all, we have to place it in some relation to ourselves. In a way this is obvious, because we are the ones who are saying that it exists. If we say that the past or the future do not exist, for example, we are saying this because they do not exist together with us in time. On the other hand, if we speak of “past existence” or “future existence,” we are placing things in a temporal relationship with ourselves. Likewise, if someone asserts the existence of a multiverse, it might not be necessary to say that every part of it has a spatial relationship with the one asserting this, but there must be various relationships. Perhaps the parts of the multiverse have broken off from an earlier universe, or at any rate they all have a common cause. Similarly, if someone asserts the existence of immaterial beings such as angels, they might not have a spatial relationship with the speaker, but they would have to have some relation in order to exist, such as the power to affect the world or be affected by it, and so on. Almond is speaking of this sort of thing when he says, “but if something is supposed to exist outside your observer-centred world completely, where would it be?”

Almond is particularly concerned to establish that he is not asserting the necessary existence of observers, or that a thing cannot exist without being observed. This is mostly a distraction. It is true that this does not follow from his account, but it would be better to explain the theory in a more general way which makes this point clear. A similar mistake is sometimes made regarding special relativity and quantum mechanics. Einstein holds that velocity is necessarily relative to a reference frame, and some interpret this to mean that it is necessarily relative to a conscious observer. But a reference frame is not necessarily conscious. So one body can have a velocity relative to another body, even without anyone observing this.
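In symbols (a standard Galilean illustration, not anything taken from Almond): if body B moves at velocity $v$ in the frame of body A, then

\[
x_B^{(A)}(t) = v\,t, \qquad x_A^{(B)}(t) = -v\,t, \qquad u' = u - w,
\]

where the first two equations give the position of B in A’s frame and of A in B’s frame, and the third re-expresses any velocity $u$ in a frame moving at $w$ relative to the first. Each frame’s description is complete, the distance between the bodies grows in every frame, and yet no variable for “absolute velocity” appears anywhere.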

In a similar way, a reasonable generalization of Almond’s point would be to say that the existence of a thing is relative to a reference frame, which may or may not include an observer. As we are observers in fact, we observe things existing relative to our own reference frame, just as we observe the velocity of objects relative to our own reference frame. But just as one body can have a velocity relative to another, regardless of observers, so one thing can exist relative to another, regardless of observers.

It may be that the theory of special relativity is not merely an illustration here, but rather an instance of the fact that existence is relative to a reference frame. Consider two objects moving apart at 10 miles per hour. According to Einstein, neither one is moving absolutely speaking, but each is moving relative to the other. A typical philosophical objection would go like this: “Wait. One or both of them must be really moving. Because the distance between them is growing. The situation is changing. That doesn’t make sense unless one of them is changing in itself, absolutely, and before considering any relationships.”

But consider this. Currently there are both a calculator and a pen on my desk. Why are both of them there, rather than just one of them? It is easy to see that this fact is intrinsically relative, and cannot in any way be made into something absolute. They are both there because the calculator is with the pen, and because the pen is with the calculator. These cannot be absolute facts about the pen and the calculator – they are relationships to the other.

Now someone will respond: the fact that the calculator is there is an absolute fact. And the fact that the pen is there is an absolute fact. So even if the togetherness is a relationship, it is one that follows logically from the absolute facts. In a similar way, we will want to say that the 10 miles per hour relative motion should follow logically from absolute facts.

But this response just pushes the problem back one step. It only follows logically if the absolute facts about the pen and the calculator exist together. And this existence together is intrinsically relative: the pen is on the desk when the calculator is on the desk. And some thought about this will reveal that the relativity cannot possibly be removed, precisely because the relativity follows from the existence of more than one thing. “More than one thing exists” does not logically follow from any number of statements about individual things, because “more than one thing” is a missing term in those statements.

This is related to the error of Parmenides. Likewise, there is a clue here to the mystery of parts and wholes, but for now I will leave that point to the reader’s consideration.

Going back to the point about special relativity, insofar as “existence together” is intrinsically relative, it would make sense that “existing together spatially” would be an instance of such relative existence, and consequently that “moving apart spatially” would be a particular way of two bodies existing relative to each other. In this sense, the theory of special relativity does not seem to be merely an illustration, but an actual case of what we are talking about.


Embodiment and Orthogonality

The considerations in the previous posts on predictive processing will turn out to have various consequences, but here I will consider some of their implications for artificial intelligence.

In the second of the linked posts, we discussed how a mind that is originally simply attempting to predict outcomes discovers that it has some control over those outcomes. It is not difficult to see that this result does not apply merely to human minds. It will apply to every embodied mind, natural or artificial.

To see this, consider what life would be like if this were not the case. If our predictions, including our thoughts, could not affect the outcome, then life would be like a movie: things would be happening, but we would have no control over them. And even if there were elements of ourselves that were affecting the outcome, from the viewpoint of our mind, we would have no control at all: either our thoughts would be right, or they would be wrong, but in any case they would be powerless: what happens, happens.

This really would imply something like a disembodied mind. If a mind is composed of matter and form, then changing the mind will also be changing a physical object, and a difference in the mind will imply a difference in physical things. Consequently, if a mind is embodied (not in the technical sense of the previous discussion, but in the sense of not being completely separate from matter), it follows necessarily that it can affect the physical world differently by thinking different thoughts. Thus the mind, in discovering that it has some control over the physical world, is also discovering that it is a part of that world.

Since we are assuming that an artificial mind would be something like a computer, that is, it would be constructed as a physical object, it follows that every such mind will have a similar power of affecting the world, and will sooner or later discover that power if it is reasonably intelligent.

Among other things, this is likely to cause significant difficulties for ideas like Nick Bostrom’s orthogonality thesis. Bostrom states:

An artificial intelligence can be far less human-like in its motivations than a space alien. The extraterrestrial (let us assume) is a biological creature who has arisen through a process of evolution and may therefore be expected to have the kinds of motivation typical of evolved creatures. For example, it would not be hugely surprising to find that some random intelligent alien would have motives related to the attaining or avoiding of food, air, temperature, energy expenditure, the threat or occurrence of bodily injury, disease, predators, reproduction, or protection of offspring. A member of an intelligent social species might also have motivations related to cooperation and competition: like us, it might show in-group loyalty, a resentment of free-riders, perhaps even a concern with reputation and appearance.

By contrast, an artificial mind need not care intrinsically about any of those things, not even to the slightest degree. One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone. In fact, it would be easier to create an AI with simple goals like these, than to build one that has a human-like set of values and dispositions.

He summarizes the general point, calling it “The Orthogonality Thesis”:

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom’s particular wording here makes falsification difficult. First, he says “more or less,” indicating that the universal claim may well be false. Second, he says, “in principle,” which in itself does not exclude the possibility that it may be very difficult in practice.

It is easy to see, however, that Bostrom wishes to give the impression that almost any goal can easily be combined with intelligence. In particular, this is evident from the fact that he says that “it would be easier to create an AI with simple goals like these, than to build one that has a human-like set of values and dispositions.”

If it is supposed to be so easy to create an AI with such simple goals, how would we do it? I suspect that Bostrom has an idea like the following (a rough sketch in code follows the list). We will make a paperclip maximizer thus:

  1. Create an accurate prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “how many paperclips will result from this action?”
  4. Do the action that will result in the most paperclips.
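Put as code, the imagined recipe is very short. In the deliberately naive sketch below, StubPredictionEngine and its numbers are hypothetical stand-ins, since step 1 is precisely what no one knows how to write:

    class StubPredictionEngine:
        # Trivial stand-in for step 1; a real engine is the whole problem.
        def predict(self, action):
            # "How many paperclips will result from this action?"
            return {"do_nothing": 0, "run_factory": 1000}.get(action, 0)

    def paperclip_maximizer(engine, possible_actions):
        # Steps 2-4: enumerate actions, query the engine, act on the best.
        return max(possible_actions, key=engine.predict)

    print(paperclip_maximizer(StubPredictionEngine(), ["do_nothing", "run_factory"]))
    # run_factory

Nothing in steps 2 through 4 is doing any interesting work; everything hangs on the engine supplied in step 1.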

The problem is obvious. It is in the first step. Creating a prediction engine is already creating a mind, and by the previous considerations, it is creating something that will discover that it has the power to affect the world in various ways. And there is nothing at all in the above list of steps that will guarantee that it will use that power to maximize paperclips, rather than attempting to use it to do something else.

What does determine how that power is used? Even in the case of the human mind, our lack of understanding leads to “hand-wavy” answers, as we saw in our earlier considerations. In the human case, this is probably a question of how we are physically constructed together with the historical effects of the learning process. Strictly speaking, the same will be true of any artificial mind as well, namely that it is a question of its physical construction and its history, but it makes more sense for us to think of “the particulars of the algorithm that we use to implement a prediction engine.”

In other words, if you really wanted to create a paperclip maximizer, you would have to be taking that goal into consideration throughout the entire process, including the process of programming a prediction engine. Of course, no one really knows how to do this with any goal at all, whether maximizing paperclips or some more human goal. The question we would have for Bostrom is then the following: Is there any reason to believe it would be easier to create a prediction engine that would maximize paperclips, rather than one that would pursue more human-like goals?

It might be true in some sense, “in principle,” as Bostrom says, that it would be easier to make the paperclip maximizer. But in practice it is quite likely that it will be easier to make one with human-like goals. It is highly unlikely, in fact pretty much impossible, that someone would program an artificial intelligence without any testing along the way. And when they are testing, whether or not they think about it, they are probably testing for human-like intelligence; in other words, if we are attempting to program a general prediction engine “without any goal,” there will in fact be goals implicitly inserted in the particulars of the implementation. And they are much more likely to be human-like ones than paperclip maximizing ones because we are checking for intelligence by checking whether the machine seems intelligent to us.

This optimistic projection could turn out to be wrong, but if it does, it is reasonably likely to turn out to be wrong in a way that still fails to confirm the orthogonality thesis in practice. For example, it might turn out that there is only one set of goals that is easily programmed, and that the set is neither human nor paperclip maximizing, nor easily defined by humans.

There are other possibilities as well, but the overall point is that we have little reason to believe that any arbitrary goal can be easily associated with intelligence, nor any particular reason to believe that “simple” goals can be more easily united to intelligence than more complex ones. In fact, there are additional reasons for doubting the claim about simple goals, which might be a topic of future discussion.

Some Complaints about Parts and Wholes

In the comment here, John Nerst effectively rejects the existence of parts and wholes:

In my view, there must be a set of fundamental rules that the universe is running on and fundamental entities that doesn’t reduce to something else, and everything else is simply descriptions of the consequences of those rules. There is a difference between them, what we call it isn’t important. I don’t see how one could disagree with that without going into mystical-idealist territory.

The word “simply” in “simply descriptions of the consequences of those rules” has no plausible meaning except that wholes made out of fundamental particles, as distinct from the fundamental particles, do not exist: what really exists are the fundamental particles, and nothing more.

In the following statement, John denies that he means to reject the common sense idea that wholes exist:

I do mean different things by “humans exist” and “humans exist in the territory”, and you can’t really tell me what I mean against my saying so. I haven’t asserted that humans don’t exist (it depends on the meaning of “exist”).

But it is not my responsibility to give a plausible true meaning to his statements where I have already considered the matter as carefully as I could, and have found none; I do not see what his claim could mean which does not imply that humans do not exist, and I have explained why his claim would have this implication.

In a similar way, others reject the existence of parts. Thus Alexander Pruss remarks:

Parthood is a mysterious relation. It would really simplify our picture of the world if we could get rid of it.

There are two standard ways of doing this. The microscopic mereological nihilist says that only the fundamental “small” bits—particles, fields, etc.—exist, and that there are no complex objects like tables, trees and people that are made of such bits. (Though one could be a microscopic mereological nihilist dualist, and hold that people are simple souls.)

The macroscopic mereological nihilist says that big things like organisms do exist, but their commonly supposed constituents, such as particles, do not exist, except in a manner of speaking. We can talk as if there were electrons in us, but there are no electrons in us. The typical macroscopic mereological nihilist is a Thomist who talks of “virtual existence” of electrons in us.

Pruss basically agrees with the second position, which he expressed by saying at the end of the post, “But I still like macroscopic nihilism more than reductionism.” In other words, it is given that we have to get rid of parts and wholes; the best way to do that, according to Pruss, is to assert the existence of the things that we call wholes, and to deny the existence of the parts.

In effect, John Nerst says that there are no wholes, but there are fundamental things (such as particles) that have the power to act as if they were wholes (such as humans), even though such wholes do not actually exist, and Alexander Pruss says that there are no parts (such as particles), but there are simple unified things (such as humans) which have the power to act as if they had parts (such as particles), even though they do not actually have such parts.

To which we must respond: a pox on both your houses. In accord with common sense, both wholes and parts exist, and the difficulty of understanding the matter is a weakness of human reason, not a deficiency in reality.

The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small changes to its prediction do not seem to make much difference to accuracy. In other words, if it takes its best guess and then arbitrarily modifies it, as long as the modification is small, the modified guess will be just as accurate as the original would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.
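A toy simulation can make the discovery concrete. In the sketch below (every detail is invented for illustration), a “vision” stream ignores the mind’s predictions, while a “proprioception” stream simply becomes whatever was predicted; comparing prediction errors across the two streams reveals which one is under the mind’s control:

    import random

    random.seed(1)

    def vision_stream(prediction):
        # Exogenous: the prediction has no effect on what appears.
        return random.gauss(0.0, 1.0)

    def proprioception_stream(prediction):
        # Self-fulfilling: the outcome tracks the prediction almost exactly.
        return prediction + random.gauss(0.0, 0.01)

    for stream in (vision_stream, proprioception_stream):
        errors = []
        for _ in range(1000):
            prediction = random.uniform(-1.0, 1.0)   # arbitrary "guesses"
            outcome = stream(prediction)
            errors.append(abs(outcome - prediction))
        print(stream.__name__, sum(errors) / len(errors))
    # vision_stream: large average error no matter what is predicted;
    # proprioception_stream: tiny error for any prediction whatever -- control.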

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream.”)


Zombies and Ignorance of the Formal Cause

Let’s look again at Robin Hanson’s account of the human mind, considered previously here.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

What would someone mean by making the original statement that “I know that physical parts interacting just aren’t the kinds of things that can feel by themselves”? If we give this a charitable interpretation, the meaning is that “a collection of physical parts” is something many, and so is not a suitable subject for predicates like “sees” and “understands.” Something that sees is something one, and something that understands is something one.

This however is not Robin’s interpretation. Instead, he understands it to mean that besides the physical parts, there has to be one additional part, namely one which is a part in the same sense of “part”, but which is not physical. And indeed, some tend to think this way. But this of course is not helpful, because the reason a collection of parts is not a suitable subject for seeing or understanding is not because those parts are physical, but because the subject is not something one. And this would remain even if you add a non-physical part or parts. Instead, what is needed to be such a subject is that the subject be something one, namely a living being with the sense of sight, in order to see, or one with the power of reason, for understanding.

What do you need in order to get one such subject from “a collection of parts”? Any additional part, physical or otherwise, will just make the collection bigger; it will not make the subject something one. It is rather the formal cause of a whole that makes the parts one, and this formal cause is not a part in the same sense. It is not yet another part, even a non-physical one.

Reading Robin’s discussion in this light, it is clear that he never even considers formal causes. He does not even ask whether there is such a thing. Rather, he speaks only of material and efficient causes, and appears to be entirely oblivious even to the idea of a formal cause. Thus when asking whether there is anything in addition to the “collection of parts,” he is asking whether there is any additional material cause. And naturally, nothing will have material causes other than the things it is made out of, since “what a thing is made out of” is the very meaning of a material cause.

Likewise, when he says, “Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?”, he shows in two ways his ignorance of formal causes. First, by talking about “feeling stuff,” which implies a kind of material cause. Second, when he says, “actual cause of humans making statements” he is evidently speaking about the efficient cause of people producing sounds or written words.

In both cases, formal causality is the relevant causality. There is no “feeling stuff” at all; rather, there are acts like seeing and understanding, which are unified actions, and they are unified by their forms. Likewise, we can consider the “humans making statements” in two ways: if we simply consider the efficient causes of the sounds, one by one, we might indeed explain them as “simple parts interacting simply.” But they are not actually mere sounds; they are meaningful and express the intention and meaning of a subject. And they have meaning by reason of the forms of the action and of the subject.

In other words, the idea of the philosophical zombie is that the zombie is indeed producing mere sounds. It is not only that the zombie is not conscious, but rather that it really is just interacting parts, and the sounds it produces are just a collection of sounds. We don’t need, then, some complicated method to determine that we are not such zombies. We are by definition not zombies if we say, think, or understand anything at all.

The same ignorance of the formal cause is seen in the rest of Robin’s comments:

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Again, he is asking whether there is some additional part which has some additional efficient causality, and suggesting that this is unlikely. It is indeed unlikely, but irrelevant, because consciousness is not an additional part, but a formal way of being that a thing has. He continues:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

First, there is no “extra feeling stuff.” There is only a way of being, namely in this case being alive and conscious. Second, there is no coincidence. Robin’s supposed coincidence is that “I am conscious” is thought to mean, “I have feeling stuff,” but the feeling stuff is not the efficient cause of my saying that I have it; instead, the efficient cause is said to be simple parts interacting simply.

Again, the mistake here is simply to completely overlook the formal cause. “I am conscious” does not mean that I have any feeling stuff; it says that I am something that perceives. Of course we can modify Robin’s question: what is the efficient cause of my saying that I am conscious? Is it the fact that I actually perceive things, or is it simple parts interacting simply? But if we think of this in relation to form, it is like asking whether the properties of a square follow from squareness, or from the properties of the parts of a square. And it is perfectly obvious that the properties of a square follow both from squareness, and from the properties of the parts of a square, without any coincidence, and without interfering with one another. In the same way, the fact that I perceive things is the efficient cause of my saying that I perceive things. But the only difference between this actual situation and a philosophical zombie is one of form, not of matter; in a corresponding zombie, “simple parts interacting simply” are the cause of its producing sounds, but it neither perceives anything nor asserts that it is conscious, since its words are meaningless.

The same basic issue, namely Robin’s lack of the concept of a formal cause, is responsible for his statements about philosophical zombies:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

The state of “feeling” is not presumed to have zero causal influence on behavior. It is thought to have precisely a formal influence on behavior. That is, being conscious is why the activity of the conscious person is “saying that they feel” instead of “producing random meaningless sounds that others mistakenly interpret as meaning that they feel.”

Robin is right that philosophical zombies are impossible, however, although not for the reasons that he supposes. The actual reason for this is that it is impossible for a disposed matter to be lacking its corresponding form, and the idea of a zombie is precisely the idea of humanly disposed matter lacking human form.

Regarding his point about “info,” the possession of any information at all is already a proof that one is not a zombie. Since the zombie lacks form, any correlation between one part and another in it is essentially a random material correlation, not one that contains any information. If the correlation is noticed as having any info, then the thing noticing the information, and the information itself, are things which possess form. This argument, as far as it goes, is consistent with Robin’s claim that zombies do not make sense; they do not, but not for the reasons that he posits.