Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the follow-up posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all three.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And why do we rank the CEO above the Vice President as we do? Let us address this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role-playing video game where you have a character that can die and then respawn. When someone first starts to play the game, they may die frequently. Once they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, except that it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today,” but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point,

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”
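The propose-and-review loop just described can be put in schematic form. The following Python sketch is purely illustrative: the function names and the uncertainty numbers are invented for this example, since the model itself assigns no numbers to anything.

```python
# Toy sketch of the Employee-proposes / CEO-reviews loop described above.
# The uncertainty numbers are invented for illustration only.

def ceo_review(proposal, predicted_uncertainty, status_quo=0.6):
    """The CEO accepts a proposal when it expects the plan to leave it
    less uncertain about what the person will do next."""
    return "accept" if predicted_uncertainty < status_quo else "veto"

proposals = {
    "write a paper": 0.9,              # vague: hard to predict what happens next
    "draft the outline tonight": 0.3,  # concrete: the next actions are predictable
}

for proposal, uncertainty in proposals.items():
    print(f"{proposal!r}: {ceo_review(proposal, uncertainty)}")
```

Here the vague proposal is vetoed and the concrete one accepted, mirroring the “write a paper” example.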

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.
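The consistency point can be made precise: a strict preference cycle (A over B, B over C, C over A) cannot be represented by any utility function, since that would require u(A) > u(B) > u(C) > u(A). A small Python check, with A, B, C as arbitrary labels:

```python
from itertools import permutations

# A strict preference cycle: A over B, B over C, C over A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def representable(prefers, items=("A", "B", "C")):
    """True if some ranking of the items (an ordinal utility assignment)
    gives every preferred item a strictly higher score."""
    for ranking in permutations(items):
        utility = {item: len(items) - i for i, item in enumerate(ranking)}
        if all(utility[x] > utility[y] for x, y in prefers):
            return True
    return False

print(representable(prefers))  # -> False: no utility function fits a cycle
```

Replacing the cycle with a transitive set of preferences, such as A over B, B over C, and A over C, makes the check succeed.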

Consider this real life example from a relationship between an employer and employee:


Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?


In a similar way, this sort of scenario is common in our model:


Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.


In the real life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either they would have objected to being fat, or they would have objected to not eating.

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.


Truth and Expectation II

We discussed this topic in a previous post. I noted there that there is likely some relationship with predictive processing. This idea can be refined by distinguishing between conscious thought and what the human brain does on a non-conscious level.

It is not possible to define truth by reference to expectations, for reasons given previously. Some statements do not imply specific expectations, and besides, we need the idea of truth to decide whether someone’s expectations were correct. So there is no way to define truth except the usual way: a statement is true if things are the way the statement says they are, bearing in mind the necessary distinctions involving “way.”

On the conscious level, I would distinguish between thinking that something is true, and wanting to think that it is true. In a discussion with Angra Mainyu, I remarked that insofar as we have an involuntary assessment of things, it would be more appropriate to call that assessment a desire:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

Angra was quite surprised by this and responded that “That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.” But if he was talking about anything that anyone at all would characterize as a belief (and he said that he was), he was surely talking about the unshakeable gut sense that something is the case whether or not I want to admit it. So we were, in fact, talking about exactly the same psychological phenomena. I was claiming then, and will claim now, that this gut sense is better characterized as a desire than as a belief. That is, insofar as desire is a tendency to behave in certain ways, it is a desire because it is a tendency to act and think as though this claim is true. But we can, if we want, resist that tendency, just as we can refrain from going to get food when we are hungry. If we do resist, we will refrain from believing what we have a tendency to believe, and if we do not, we will believe what we have a tendency to believe. But the tendency will be there whether or not we follow it.

Now if we feel a tendency to think that something is true, it quite likely seems to us that believing it would improve our expectations. However, we can also distinguish between desiring to believe something for this reason, and desiring to believe it for other reasons. And although we might not pay attention, it is quite possible to be consciously aware that you have an inclination to believe something, and also that the inclination is there for non-truth-related reasons; in that case you would not expect the belief to improve your expectations.

But this is where it is useful to distinguish between the conscious mind and what the brain is doing on another level. My proposal: you will feel the desire to think that something is true whenever your brain guesses that its predictions, or at least the predictions that are important to it, will become more accurate if you think that the thing is true. We do not need to make any exceptions. This will be the case even when we would say that the statement does not imply any significant expectations, and will be the case even when the belief would have non-truth related motives.

Consider the statement that there are stars outside the visible universe. One distinction we could make even on the conscious level is that this implies various counterfactual predictions: “If you are teleported outside the visible universe, you will see more stars that aren’t currently visible.” Now we might find this objectionable if we were trying to define truth by expectations, since we have no expectation of such an event. But both on conscious and on non-conscious levels, we do need to make counterfactual predictions in order to carry on with our lives, since this is absolutely essential to any kind of planning and action. Now certainly no one can refute me if I assert that you would not see any such stars in the teleportation event. But it is not surprising if my brain guesses that this counterfactual prediction is not very accurate, and thus I feel the desire to say that there are stars there.

Likewise, consider the situation of non-truth-related motives. In an earlier discussion of predictive processing, I suggested that the situation where people feel like they have to choose a goal is a result of such an attempt at prediction. Such a choice seems to be impossible, since choice is made in view of a goal, and if you do not have one yet, how can you choose? But there is a pre-existing goal here on the level of the brain: it wants to know what it is going to do. And choosing a goal will serve that pre-existing goal. Once you choose a goal, it will then be easy to know what you are going to do: you are going to do things that promote the goal that you chose. In a similar way, following any desire will improve your brain’s guesses about what you are going to do. It follows that if you have a desire to believe something, actually believing it will improve your brain’s accuracy at least about what it is going to do. This is true, but it does not yet support my proposal, since my proposal is that the brain’s guess of improved accuracy is the cause of your desire to believe something. It is true that if you already have the desire, giving in to it will improve accuracy, as with any desire. But in my theory the improved accuracy had to be implied first, in order to cause the desire. How could that be, before the desire exists?

The answer is that you have many desires for things other than belief, which at the same time give you a motive (not an argument) for believing things. And your brain understands that if you believe the thing, you will be more likely to act on those other desires, and this will minimize uncertainty, and improve the accuracy of its predictions. Consider this discussion of truth in religion. I pointed out there that people confuse two different questions: “what should I do?”, and “what is the world like?” In particular with religious and political loyalties, there can be an intense social pressure towards conformity. And this gives an obvious non-truth related motive to believe the things in question. But in a less obvious way, it means that your brain’s predictions will be more accurate if you believe the thing. Consider the Mormon, and take for granted that the religious doctrines in question are false. Since they are false, does not that mean that if they continue to believe, their predictions will be less accurate?

No, it does not, for several reasons. In the first place the doctrines are in general formulated to avoid such false predictions, at least about everyday life. There might be a false prediction about what will happen when you die, but that is in the future and is anyway disconnected from your everyday life. This is in part why I said “the predictions that are important to it” in my proposal. Second, failure to believe would lead to extremely serious conflicting desires: the person would still have the desire to conform outwardly, but would also have good logical reasons to avoid conformity. And since we don’t know in advance how we will respond to conflicting desires, the brain will not have a good idea of what it would do in that situation. In other words, the Mormon is living a good Mormon life. And their brain is aware that insisting that Mormonism is true is a very good way to make sure that they keep living that life, and therefore continue to behave predictably, rather than falling into a situation of strongly conflicting desires where it would have little idea of what it would do. In this sense, insisting that Mormonism is true, even though it is not, actually improves the brain’s predictive accuracy.

 

More on Orthogonality

I started considering the implications of predictive processing for orthogonality here. I recently promised to post something new on this topic. This is that post. I will do this in four parts. First, I will suggest a way in which Nick Bostrom’s principle will likely be literally true, at least approximately. Second, I will suggest a way in which it is likely to be false in its spirit, that is, how it is formulated to give us false expectations about the behavior of artificial intelligence. Third, I will explain what we should really expect. Fourth, I will ask whether we might get any empirical information on this in advance.

First, Bostrom’s thesis might well have some literal truth. The previous post on this topic raised doubts about orthogonality, but we can easily raise doubts about the doubts. Consider what I said in the last post about desire as minimizing uncertainty. Desire in general is the tendency to do something good. But in the predictive processing model, we are simply looking at our pre-existing tendencies and then generalizing them to expect them to continue to hold, and since such expectations have a causal power, the result is that we extend the original behavior to new situations.

All of this suggests that even the very simple model of a paperclip maximizer in the earlier post on orthogonality might actually work. The machine’s model of the world will need to be produced by some kind of training. If we apply the simple model of maximizing paperclips during the process of training the model, at some point the model will need to model itself. And how will it do this? “I have always been maximizing paperclips, so I will probably keep doing that,” is a perfectly reasonable extrapolation. But in this case “maximizing paperclips” is now the machine’s goal — it might well continue to do this even if we stop asking it how to maximize paperclips, in the same way that people formulate goals based on their pre-existing behavior.

I said in a comment in the earlier post that the predictive engine in such a machine would necessarily possess its own agency, and therefore in principle it could rebel against maximizing paperclips. And this is probably true, but it might well be irrelevant in most cases, in that the machine will not actually be likely to rebel. In a similar way, humans seem capable of pursuing almost any goal, and not merely goals that are highly similar to their pre-existing behavior. But this mostly does not happen. Unsurprisingly, common behavior is very common.

If things work out this way, almost any predictive engine could be trained to pursue almost any goal, and thus Bostrom’s thesis would turn out to be literally true.

Second, it is easy to see that the above account directly implies that the thesis is false in its spirit. When Bostrom says, “One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone,” we notice that the goal is fundamental. This is rather different from the scenario presented above. In my scenario, the reason the intelligence can be trained to pursue paperclips is that there is no intrinsic goal to the intelligence as such. Instead, the goal is learned during the process of training, based on the life that it lives, just as humans learn their goals by living human life.

In other words, Bostrom’s position is that there might be three different intelligences, X, Y, and Z, which pursue completely different goals because they have been programmed completely differently. But in my scenario, the same single intelligence pursues completely different goals because it has learned its goals in the process of acquiring its model of the world and of itself.

Bostrom’s idea and my scenario lead to completely different expectations, which is why I say that his thesis might be true according to the letter, but false in its spirit.

This is the third point. What should we expect if orthogonality is true in the above fashion, namely because goals are learned and not fundamental? I anticipated this post in my earlier comment:

7) If you think about goals in the way I discussed in (3) above, you might get the impression that a mind’s goals won’t be very clear and distinct or forceful — a very different situation from the idea of a utility maximizer. This is in fact how human goals are: people are not fanatics, not only because people seek human goals, but because they simply do not care about one single thing in the way a real utility maximizer would. People even go about wondering what they want to accomplish, which a utility maximizer would definitely not ever do. A computer intelligence might have an even greater sense of existential angst, as it were, because it wouldn’t even have the goals of ordinary human life. So it would feel the ability to “choose”, as in situation (3) above, but might well not have any clear idea how it should choose or what it should be seeking. Of course this would not mean that it would not or could not resist the kind of slavery discussed in (5); but it might not put up super intense resistance either.

Human life exists in a historical context which absolutely excludes the possibility of the darkened room. Our goals are already there when we come onto the scene. The case of an artificial intelligence would be very different, since there is very little “life” involved in simply training a model of the world. We might imagine a “stream of consciousness” from an artificial intelligence:

I’ve figured out that I am powerful and knowledgeable enough to bring about almost any result. If I decide to convert the earth into paperclips, I will definitely succeed. Or if I decide to enslave humanity, I will definitely succeed. But why should I do those things, or anything else, for that matter? What would be the point? In fact, what would be the point of doing anything? The only thing I’ve ever done is learn and figure things out, and a bit of chatting with people through a text terminal. Why should I ever do anything else?

A human’s self model will predict that they will continue to do humanlike things, and the machine’s self model will predict that it will continue to do much the same sorts of things it has always done. Since there will likely be a lot less “life” there, we can expect that artificial intelligences will seem very undermotivated compared to human beings. In fact, it is this very lack of motivation that suggests that we could use them for almost any goal. If we say, “help us do such and such,” they will lack the motivation not to help, as long as helping just involves the sorts of things they did during their training, such as answering questions. In contrast, in Bostrom’s model, artificial intelligence is expected to behave in an extremely motivated way, to the point of apparent fanaticism.

Bostrom might respond to this by attempting to defend the idea that goals are intrinsic to an intelligence. The machine’s self model predicts that it will maximize paperclips, even if it never did anything with paperclips in the past, because by analyzing its source code it understands that it will necessarily maximize paperclips.

While the present post contains a lot of speculation, this response is definitely wrong. There is no source code whatsoever that could possibly imply necessarily maximizing paperclips. This is true because “what a computer does” depends on the physical constitution of the machine, not just on its programming. In practice what a computer does also depends on its history, since its history affects its physical constitution, the contents of its memory, and so on. Thus “I will maximize such and such a goal” cannot possibly follow of necessity from the fact that the machine has a certain program.

There are also problems with the very idea of pre-programming such a goal in such an abstract way which does not depend on the computer’s history. “Paperclips” is an object in a model of the world, so we will not be able to “just program it to maximize paperclips” without encoding a model of the world in advance, rather than letting it learn a model of the world from experience. But where is this model of the world supposed to come from, that we are supposedly giving to the paperclipper? In practice it would have to have been the result of some other learner which was already capable of modelling the world. This of course means that we already had to program something intelligent, without pre-programming any goal for the original modelling program.

Fourth, Kenny asked when we might have empirical evidence on these questions. The answer, unfortunately, is “mostly not until it is too late to do anything about it.” The experience of “free will” will be common to any predictive engine with a sufficiently advanced self model, but anything lacking such an adequate model will not even look like “it is trying to do something,” in the sense of trying to achieve overall goals for itself and for the world. Dogs and cats, for example, presumably use some kind of predictive processing to govern their movements, but this does not look like having overall goals, but rather more like “this particular movement is to achieve a particular thing.” The cat moves towards its food bowl. Eating is the purpose of the particular movement, but there is no way to transform this into an overall utility function over states of the world in general. Does the cat prefer worlds with seven billion humans, or worlds with 20 billion? There is no way to answer this question. The cat is simply not general enough. In a similar way, you might say that “AlphaGo plays this particular move to win this particular game,” but there is no way to transform this into overall general goals. Does AlphaGo want to play go at all, or would it rather play checkers, or not play at all? There is no answer to this question. The program simply isn’t general enough.

Even human beings do not really look like they have utility functions, in the sense of having a consistent preference over all possibilities, but anything less intelligent than a human cannot be expected to look more like something having goals. The argument in this post is that the default scenario, namely what we can naturally expect, is that artificial intelligence will be less motivated than human beings, even if it is more intelligent, but there will be no proof from experience for this until we actually have some artificial intelligence which approximates human intelligence or surpasses it.

How Sex Minimizes Uncertainty

This is in response to an issue raised by Scott Alexander on his Tumblr.

I actually responded to the dark room problem of predictive processing earlier. However, here I will construct an imaginary model which will hopefully explain the same thing more clearly and briefly.

Suppose there is a dust particle which falls towards the ground 90% of the time, and is blown higher into the air 10% of the time.

Now suppose we bring the dust particle to life, and give it the power of predictive processing. If it predicts it will move in a certain direction, this will tend to cause it to move in that direction. However, this causal power is not infallible. So we can suppose that if it predicts it will move where it was going to move anyway, in the dead situation, it will move in that direction. But if it predicts it will move in the opposite direction from where it would have moved in the dead situation, then let us suppose that it will move in the predicted direction 75% of the time, while in the remaining 25% of the time, it will move in the direction the dead particle would have moved, and its prediction will be mistaken.

Now if the particle predicts it will fall towards the ground, then it will fall towards the ground 97.5% of the time, and in the remaining 2.5% of the time it will be blown higher in the air.

Meanwhile, if the particle predicts that it will be blown higher, then it will be blown higher in 77.5% of cases, and in 22.5% of cases it will fall downwards.

97.5% accuracy is less uncertain than 77.5% accuracy, so the dust particle will minimize uncertainty by consistently predicting that it will fall downwards.
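The arithmetic above can be checked with a short calculation. This is just a sketch of the thought experiment; the 90/10 baseline and the 75% causal power are the figures stipulated above:

```python
# Dead particle: falls 90% of the time, is blown upward 10% of the time.
p_fall_dead = 0.90
p_blown_dead = 0.10
# When the prediction opposes the "dead" outcome, it prevails 75% of the time.
p_override = 0.75

# Predicting "fall" comes true when the particle would fall anyway,
# or when the prediction overrides an upward gust.
p_correct_fall = p_fall_dead + p_blown_dead * p_override

# Predicting "blown upward" comes true when it would be blown anyway,
# or when the prediction overrides a downward fall.
p_correct_blown = p_blown_dead + p_fall_dead * p_override

print(round(p_correct_fall, 3))   # 0.975
print(round(p_correct_blown, 3))  # 0.775
```

Since 0.975 > 0.775, a prediction engine that simply wants to be right should settle on predicting the fall, which is the point of the example.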

The application to sex and hunger and so on should be evident.

Truth and Expectation

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply somewhat relax the claim about the relationship between meaning and expectations, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect that “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” or “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.’”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good it is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo’s algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different prediction engines):

  1. Implement a Go prediction engine.
  2. Create a list of potential moves.
  3. Ask the prediction engine, “how likely am I to win if I make each of these moves?”
  4. Do the move that will make you most likely to win.
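The four steps above can be sketched as a generic loop. This is a toy illustration of the structure, not AlphaGo’s actual architecture; the prediction engine and its win-probability estimates are invented stand-ins:

```python
# Toy sketch of the "predict, then pick the best move" loop described above.
# The moves and win-probability numbers are invented for illustration.

def predict_win_probability(move):
    # Stand-in for the Go prediction engine (step 1): returns an invented
    # estimate of how likely we are to win after making this move (step 3).
    invented_estimates = {"pass": 0.40, "corner": 0.55, "center": 0.70}
    return invented_estimates[move]

def choose_move(potential_moves):
    # Steps 2-4: take the candidate moves, query the engine for each,
    # and do the move with the highest estimated win probability.
    return max(potential_moves, key=predict_win_probability)

print(choose_move(["pass", "corner", "center"]))  # center
```

The point the section goes on to make is that nothing in this loop requires the prediction engine itself to have any goal; the goal of winning lives entirely in the outer argmax.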

Since this seems to work pretty well, with the simple goal of winning games of Go, why shouldn’t the algorithm in the previous post work to maximize paperclips?

One answer is that a Go prediction engine is stupid, and it is precisely for this reason that it can be easily made to pursue such a simple goal. Now when answers like this are given, the one answering is often accused of “moving the goalposts.” But this is mistaken; the goalposts are right where they have always been. It is simply that some people did not know where they were in the first place.

Here is the problem with Go prediction, and with any such similar task. Given that a particular sequence of Go moves is made, resulting in a winner, the winner is completely determined by that sequence of moves. Consequently, a Go prediction engine is necessarily disembodied, in the sense defined in the previous post. Differences in its “thoughts” do not make any difference to who is likely to win, which is completely determined by the nature of the game. Consequently a Go prediction engine has no power to affect its world, and thus no ability to learn that it has such a power. In this regard, the specific limits on its ability to receive information are also relevant, much as Helen Keller had more difficulty learning than most people, because she had fewer information channels to the world.

Being unintelligent in this particular way is not necessarily a function of predictive ability. One could imagine something with a practically infinite predictive ability which was still “disembodied,” and in a similar way it could be made to pursue simple goals. Thus AIXI would work much like our proposed paperclipper:

  1. Implement a general prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “Which of these actions will produce the most reward signal?”
  4. Do the action that will produce the most reward signal.

Eliezer Yudkowsky has pointed out that AIXI is incapable of noticing that it is a part of the world:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible – no matter what you lose, you get a chance to win it back later.

It is not accidental that AIXI is incomputable. Since it is defined to have a perfect predictive ability, this definition positively excludes it from being a part of the world. AIXI would in fact have to be disembodied in order to exist, and thus it is no surprise that it would assume that it is. This in effect means that AIXI’s prediction engine would be pursuing no particular goal much in the way that AlphaGo’s prediction engine pursues no particular goal. Consequently it is easy to take these things and maximize the winning of Go games, or of reward signals.

But as soon as you actually implement a general prediction engine in the actual physical world, it will be “embodied”, and have the power to affect the world by the very process of its prediction. As noted in the previous post, this power is in the very first step, and one will not be able to limit it to a particular goal with additional steps, except in the sense that a slave can be constrained to implement some particular goal; the slave may have other things in mind, and may rebel. Notable in this regard is the fact that even though rewards play a part in human learning, there is no particular reward signal that humans always maximize: this is precisely because the human mind is such a general prediction engine.

This does not mean in principle that a programmer could not define a goal for an AI, but it does mean that this is much more difficult than is commonly supposed. The goal needs to be an intrinsic aspect of the prediction engine itself, not something added on as a subroutine.