Truth and Expectation III

Consider what I said at the end of the last post on this topic. When our Mormon protagonist insists that his religion is true, he improves the accuracy of his expectations about the world. His expectations would actually be less accurate if he decided that his religious beliefs were false.

Now we said in the first post on truth and expectation that in part we seem to determine the meaning of a statement by the expectations that it implies. If this is the case, then why should we not say that the Mormon’s beliefs are definitely true? In fact, in the post linked above about truth in religion, I suggested that people frequently do mean something like this when they say that their own religion is true. Nonetheless, it is easy to see that truth in this particular sense does not imply that each and every claim in the religion, understood as a claim about the world, is true.

But why not, if one’s expectations become more accurate, just as with statements that are clearly true? As I noted in the earlier post, to say “that man is pretty tall” is to make a statement about the man. It is not a statement about myself, nor about my expectations, even if the meaning is partly determined by these things. So ultimately the truth or falsehood of the claim about the man is going to be determined by facts about the man, even if they need to be understood as facts about the man in relation to me and my expectations.

Consider again Scott Sumner’s anti-realism. Scott claims that we cannot distinguish between “our perception of reality, and actual reality.” As I said there, this is right in the sense that we cannot consistently hold the opinion, “This is my opinion about reality, but my opinion is false: reality is actually different.” But we can recognize the distinct meanings in “my perception of reality” and “actual reality.” Scott’s failure to recognize this distinction leads him to suggest on occasion that our beliefs about certain matters are just beliefs about what people in the future will believe. For example, he says in this comment:

You misunderstood Rorty. He is not recommending that you try to trick your colleagues into believing something that is not true. Rather he is merely describing what society regards as true. And who can deny that society tends to regard things as true, when they believe them to be true. Now you might say “but what society believes to be true is not always really true.” But Rorty would say that statement means nothing, or else it means that you predict that a future society will have a different view of what is true.

Most people, without even realizing it, assume that there is some sort of “God-like” view of what is “really true” which is separate from what we believe is true, and/or will believe in the future to be true. Rorty is an atheist. He believes that what society’s experts believe is the best we can do, the closest we can come to describing reality. Rorty would suggest to Hayek “if you want to convince other economists, use persuasive arguments.” I think that is very reasonable advice. It is what I try to do on this blog.

This is much like Bob Seidensticker’s claim that moral beliefs are beliefs about what society in the future will believe to be moral. In both cases, there is an unacceptable circularity. If our belief is about what people in the future will believe, what are the future people’s beliefs about? There is only one credible explanation here: people’s beliefs are about what they say their beliefs are about, namely the very things they are talking about. Moral beliefs are about whether actions are good or bad, and beliefs describing the world are about the world, not about the people who hold the beliefs, present or future.

This does not imply that there is “some sort of ‘God-like’ view of what is ‘really true’.” It just implies that our beliefs are distinct from other things in the world. Sumner is suggesting that a situation where everyone permanently holds a false belief is inconceivable. But this is quite conceivable, and we can easily see how it could happen. Just now I counted the cups in my cupboard and there are exactly 14 (if anyone is surprised by that number: most of them are plastic). It is entirely conceivable, however, that I miscounted. And since I also have some cups that are not currently in the cupboard, if I did miscount, I will probably never get it right, since I will just assume there was some other assortment. And since there is presumably no way for the public to discover the truth about this, society will be permanently deluded about the number of cups in my cupboard at 11:16 AM on July 14, 2018.

What would it mean, then, if it were not “actually true” that there were 14 cups in my cupboard? The answer would be determined by facts about the cups and the cupboard, not by facts about me, about society, about my expectations or society’s expectations. It would not be actually true if there were, in fact, only 13 cups there, even if no one would ever know this.

This all remains related to expectations, however. I don’t think I miscounted, so I think that if I go and count them again, I will get 14 again. If I did miscount, there is a good chance that counting again would result in a different number. Now I don’t intend to bother, so this expectation is counterfactual. Nonetheless, there are at least conceivable counterfactual situations where I would discover that I was mistaken. In a similar way, to say that the Mormon holds false religious beliefs implies that there are at least counterfactual situations where the falsehood would be uncovered by a violation of his expectations: e.g. if he had been alive at the time and had followed Joseph Smith around to see whether there were any golden plates and so on.

Nonetheless, one cannot ultimately separate the truth of a statement from facts about the thing the statement is about. A statement is true if things are the way it says they are, “the way” being correctly understood here. If this were not the case, the statement would not be about the things in the first place.


Truth and Expectation II

We discussed this topic in a previous post. I noted there that there is likely some relationship with predictive processing. This idea can be refined by distinguishing between conscious thought and what the human brain does on a non-conscious level.

It is not possible to define truth by reference to expectations, for reasons given previously. Some statements do not imply specific expectations, and besides, we need the idea of truth to decide whether someone’s expectations were correct. So there is no way to define truth except the usual way: a statement is true if things are the way the statement says they are, bearing in mind the necessary distinctions involving “way.”

On the conscious level, I would distinguish between thinking that something is true, and wanting to think that it is true. In a discussion with Angra Mainyu, I remarked that insofar as we have an involuntary assessment of things, it would be more appropriate to call that assessment a desire:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

Angra was quite surprised by this and responded that “That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.” But if he was talking about anything that anyone at all would characterize as a belief (and he said that he was), he was surely talking about the unshakeable gut sense that something is the case whether or not I want to admit it. So we were, in fact, talking about exactly the same psychological phenomena. I was claiming then, and will claim now, that this gut sense is better characterized as a desire than as a belief. That is, insofar as desire is a tendency to behave in certain ways, it is a desire because it is a tendency to act and think as though this claim is true. But we can, if we want, resist that tendency, just as we can refrain from going to get food when we are hungry. If we do resist, we will refrain from believing what we have a tendency to believe, and if we do not, we will believe what we have a tendency to believe. But the tendency will be there whether or not we follow it.

Now if we feel a tendency to think that something is true, it is quite likely that it seems to us that believing it would improve our expectations. However, we can also distinguish between desiring to believe something for this reason, and desiring to believe something for other reasons. And although we might not pay attention, it is quite possible to be consciously aware that you have an inclination to believe something, and also that it is for non-truth-related reasons; and thus you would not expect the belief to improve your expectations.

But this is where it is useful to distinguish between the conscious mind and what the brain is doing on another level. My proposal: you will feel the desire to think that something is true whenever your brain guesses that its predictions, or at least the predictions that are important to it, will become more accurate if you think that the thing is true. We do not need to make any exceptions. This will be the case even when we would say that the statement does not imply any significant expectations, and will be the case even when the belief would have non-truth related motives.
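As a toy illustration only (my own construction, not anything from the predictive processing literature; the function and the accuracy numbers are invented), the proposal could be sketched like this:

```python
# A toy formalization of the proposal above: the felt desire to believe
# arises when the brain guesses that the predictions that matter to it
# would become more accurate given the belief.

def desire_to_believe(guessed_acc_if_believed: float,
                      guessed_acc_if_not: float) -> float:
    """Return the strength of the felt desire (zero if belief doesn't help)."""
    gain = guessed_acc_if_believed - guessed_acc_if_not
    return max(0.0, gain)

# If believing is guessed to make the important predictions more accurate,
# a desire to believe is felt; otherwise none is.
print(desire_to_believe(0.9, 0.6))  # 0.3 -> some desire to believe
print(desire_to_believe(0.5, 0.8))  # 0.0 -> no desire to believe
```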

Consider the statement that there are stars outside the visible universe. One distinction we could make even on the conscious level is that this implies various counterfactual predictions: “If you are teleported outside the visible universe, you will see more stars that aren’t currently visible.” Now we might find this objectionable if we were trying to define truth by expectations, since we have no expectation of such an event. But both on conscious and on non-conscious levels, we do need to make counterfactual predictions in order to carry on with our lives, since this is absolutely essential to any kind of planning and action. Now certainly no one can refute me if I assert that you would not see any such stars in the teleportation event. But it is not surprising if my brain guesses that this counterfactual prediction is not very accurate, and thus I feel the desire to say that there are stars there.

Likewise, consider the situation of non-truth-related motives. In an earlier discussion of predictive processing, I suggested that the feeling that one must choose a goal results from just such an attempt at prediction. Such a choice seems to be impossible, since choice is made in view of a goal, and if you do not have one yet, how can you choose? But there is a pre-existing goal here on the level of the brain: it wants to know what it is going to do. And choosing a goal will serve that pre-existing goal. Once you choose a goal, it will then be easy to know what you are going to do: you are going to do things that promote the goal that you chose. In a similar way, following any desire will improve your brain’s guesses about what you are going to do. It follows that if you have a desire to believe something, actually believing it will improve your brain’s accuracy, at least about what it is going to do. This is true, but it is not a fair argument for my proposal, since my proposal is that the brain’s guess of improved accuracy is the cause of your desire to believe something. It is true that if you already have the desire, giving in to it will improve accuracy, as with any desire. But on my theory the improved accuracy had to be implied first, in order to cause the desire.

The answer is that you have many desires for things other than belief, which at the same time give you a motive (not an argument) for believing things. And your brain understands that if you believe the thing, you will be more likely to act on those other desires, and this will minimize uncertainty and improve the accuracy of its predictions. Consider this discussion of truth in religion. I pointed out there that people confuse two different questions: “what should I do?”, and “what is the world like?” In particular with religious and political loyalties, there can be an intense social pressure towards conformity. And this gives an obvious non-truth-related motive to believe the things in question. But in a less obvious way, it means that your brain’s predictions will be more accurate if you believe the thing. Consider the Mormon, and take for granted that the religious doctrines in question are false. Since they are false, does that not mean that if they continue to believe, their predictions will be less accurate?

No, it does not, for several reasons. In the first place, the doctrines are in general formulated to avoid such false predictions, at least about everyday life. There might be a false prediction about what will happen when you die, but that is in the future and is anyway disconnected from your everyday life. This is in part why I said “the predictions that are important to it” in my proposal. Second, failure to believe would lead to extremely serious conflicting desires: the person would still have the desire to conform outwardly, but would also have good logical reasons to avoid conformity. And since we don’t know in advance how we will respond to conflicting desires, the brain will not have a good idea of what it would do in that situation. As things stand, the Mormon is living a good Mormon life. And their brain is aware that insisting that Mormonism is true is a very good way to make sure that they keep living that life, and therefore continue to behave predictably, rather than falling into a situation of strongly conflicting desires where it would have little idea of what they would do. In this sense, insisting that Mormonism is true, even though it is not, actually improves the brain’s predictive accuracy.

 

Skeptical Scenarios

I promised to return to some of the issues discussed here. The current post addresses the implications of the sort of skeptical scenario considered by Alexander Pruss in the associated discussion. Consider his original comparison of physical theories and skeptical scenarios:

The ordinary sentence “There are four chairs in my office” is true (in its ordinary context). Furthermore, its being true tells us very little about fundamental ontology. Fundamental physical reality could be made out of a single field, a handful of fields, particles in three-dimensional space, particles in ten-dimensional space, a single vector in a Hilbert space, etc., and yet the sentence could be true.

An interesting consequence: Even if in fact physical reality is made out of particles in three-dimensional space, we should not analyze the sentence to mean that there are four disjoint pluralities of particles each arranged chairwise in my office. For if that were what the sentence meant, it would tell us about which of the fundamental physical ontologies is correct. Rather, the sentence is true because of a certain arrangement of particles (or fields or whatever).

If there is such a broad range of fundamental ontologies that “There are four chairs in my office” is compatible with, it seems that the sentence should also be compatible with various sceptical scenarios, such as that I am a brain in a vat being fed data from a computer simulation. In that case, the chair sentence would be true due to facts about the computer simulation, in much the way that “There are four chairs in this Minecraft house” is true. It would be very difficult to be open to a wide variety of fundamental physics stories about the chair sentence without being open to the sentence being true in virtue of facts about a computer simulation.

If we consider this in light of our analysis of form, it is not difficult to see that Pruss is correct both about the ordinary chair sentence being consistent with a large variety of physical theories, and about the implication that it is consistent with most situations that would normally be considered “skeptical.” The reason is that to say that something is a chair is to say something about its relationships with the world, but it is not to say everything about its relationships. It speaks in particular about various relationships with the human world. And there is nothing to prevent these relationships from co-existing with any number of other kinds of relationships between its parts, its causes, and so on.

Pruss is right to insist that in order for the ordinary sentence to be true, the corresponding forms must be present. But since he is an anti-reductionist, his position implies hidden essences, and this is a mistake. Indeed, under the correct understanding of form, our everyday knowledge of things is sufficient to ensure that the forms are present, regardless of which physical theories turn out to be true, and even if some such skeptical scenario turns out to be true.

Why are these situations called “skeptical” in the first place? This is presumably because they seem to call into question whether or not we possess any knowledge of things. And in this respect, they fail in two ways, they partially fail in a third, and they succeed in one way.

First, they fail insofar as they attempt to call into question, e.g. whether there are chairs in my room right now, or whether I have two hands. These things are true and would be true even in the “skeptical” situations.

Second, they fail even insofar as they claim, e.g. that I do not know whether I am a brain in a vat. In the straightforward sense, I do know this, because the claim is opposed to the other things (e.g. about the chairs and my hands) that I know to be true.

Third, they partially fail even insofar as they claim, e.g. that I do not know whether I am a brain in a vat in a metaphysical sense. Roughly speaking, I do know that I am not, not by deducing the fact with any kind of necessity, but simply because the metaphysical claim is completely ungrounded. In other words, I do not know this infallibly, but it is extremely likely. We could compare this with predictions about the future. Thus for example Ron Conte attempts to predict the future:

First, an overview of the tribulation:
A. The first part of the tribulation occurs for this generation, beginning within the next few years, and ending in 2040 A.D.
B. Then there will be a brief period of peace and holiness on earth, lasting about 25 years.
C. The next few hundred years will see a gradual but unstoppable increase in sinfulness and suffering in the world. The Church will remain holy, and Her teaching will remain pure. But many of Her members will fall into sin, due to the influence of the sinful world.
D. The second part of the tribulation occurs in the early 25th century (about 2430 to 2437). The Antichrist reigns for less than 7 years during this time.
E. Jesus Christ returns to earth, ending the tribulation.

Now, some predictions for the near future. These are not listed in chronological order.

* The Warning, Consolation, and Miracle — predicted at Garabandal and Medjugorje — will occur prior to the start of the tribulation, sometime within the next several years (2018 to 2023).
* The Church will experience a severe schism. First, a conservative schism will occur, under Pope Francis; next, a liberal schism will occur, under his conservative successor.
* The conservative schism will be triggered by certain events: Amoris Laetitia (as we already know, so, not a prediction), and the approval of women deacons, and controversial teachings on salvation theology.
* After a short time, Pope Francis will resign from office.
* His very conservative successor will reign for a few years, and then die a martyr, during World War 3.
* The successor to Pope Francis will take the papal name Pius XIII.

Even ignoring the religious speculation, we can “know” that this account is false, simply because it is inordinately detailed. Ron Conte no doubt has reasons for his beliefs, much as the Jehovah’s Witnesses did. But just as we saw in that case, his reasons will also in all likelihood turn out to be completely disproportionate to the detail of the claims they seek to establish.

In a similar way, a skeptical scenario can be seen as painting a detailed picture of a larger context of our world, one outside our current knowledge. There is nothing impossible about such a larger context; in fact, there surely is one. But the claim about brains and vats is very detailed: if one takes it seriously, it is more detailed than Ron Conte’s predictions, which could also be taken as a statement about a larger temporal context to our situation. The brain-in-vat scenario implies that our entire world depends on another world which has things similar to brains and similar to vats, along presumably with things analogous to human beings that made the vats, and so on. And since the whole point of the scenario is that it is utterly invented, not that it is accepted by anyone, while Conte’s account is accepted at least by him, there is not even a supposed basis for thinking that things are actually this way. Thus we can say, not infallibly but with a great deal of certainty, that we are not brains in vats, just as we can say, not infallibly but with a great deal of certainty, that there will not be any “Antichrist” between 2430 and 2437.

There is nonetheless one way in which the consideration of skeptical scenarios does succeed in calling our knowledge into question. Consider them insofar as they propose a larger context to our world, as discussed above. As I said, there is nothing impossible about a larger context, and there surely is one. Here we speak of a larger metaphysical context, but we can compare this with the idea of a larger physical context.

Our knowledge of our physical context is essentially local, given the concrete ways that we come to know the world. I know a lot about the room I am in, a significant amount about the places I usually visit or have visited in the past, and some but much less about places I haven’t visited. And speaking of an even larger physical context, I know things about the solar system, but much less about the wider physical universe. And if we consider what lies outside the visible universe, I might well guess that there are more stars and galaxies and so on, but nothing more. There is not much more detail even to this as a guess: and if there is an even larger physical context, it is possible that there are places that do not have stars and galaxies at all, but other things. In other words, universal knowledge is universal, but also vague, while specific knowledge is more specific, but also more localized: it is precisely because it is local that it was possible to acquire more specific knowledge.

In a similar way, more specific metaphysical knowledge is necessarily of a more local metaphysical character: both physical and metaphysical knowledge is acquired by us through the relationships things have with us, and in both cases “with us” implies locality. We can know that the brain-in-vat scenario is mistaken, but that should not give us hope that we can find out what is true instead: even if we did find some specific larger metaphysical context to our situation, there would be still larger contexts of which we would remain unaware. Just as you will never know the things that are too distant from you physically, you will also never know the things that are too distant from you metaphysically.

I previously advocated patience as a way to avoid excessively detailed claims. There is nothing wrong with this, but here we see that it is not enough: we also need to accept our actual situation. Rebellion against our situation, in the form of painting a detailed picture of a larger context of which we can have no significant knowledge, will profit us nothing: it will just be painting a picture as false as the brain-in-vat scenario, and as false as Ron Conte’s predictions.

Self-Reference Paradox Summarized

Hilary Lawson is right to connect the issue of the completeness and consistency of truth with paradoxes of self-reference.

As a kind of summary, consider this story:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:
etc.

In this form, the story obviously exists, but in its implied form the story cannot be told, because for the story to be “told” is for it to be completed, and it is impossible for it to be completed, since it will not be complete until it contains itself, and this cannot happen.
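A small sketch in code may make this vivid. It is purely my own illustration: a generator that yields the story line by line. Any finite prefix of the story can be produced, but the story can never be finished, because each telling requires another complete telling inside it.

```python
import itertools

def campfire_story():
    """Yield the story's lines; the recursion never bottoms out."""
    yield "It was a dark and stormy night,"
    yield "and all the Cub Scouts were huddled around their campfire."
    yield "One scout looked up to the Scout Master and said:"
    yield '"Tell us a story."'
    yield "And the story went like this:"
    yield from campfire_story()  # the story contains itself

# Printing any finite prefix is easy; printing the whole story is impossible,
# not because of limited memory, but because the story has no last line.
for line in itertools.islice(campfire_story(), 10):
    print(line)
```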

Consider a similar example. You sit in a room at a desk, and decide to draw a picture of the room. You draw the walls. Then you draw yourself and your desk. But then you realize, “there is also a picture in the room. I need to draw the picture.” You draw the picture itself as a tiny image within the image of your desktop, and add tiny details: the walls of the room, your desk and yourself.

Of course, you then realize that your artwork can never be complete, in exactly the same way that the story above cannot be complete.

There is essentially the same problem in these situations as in all the situations we have described which involve self-reference: the paradox of the liar, the liar game, the impossibility of detailed future prediction, the list of all true statements, Gödel’s theorem, and so on.

In two of the above posts, namely on future prediction and Gödel’s theorem, there are discussions of James Chastek’s attempts to use the issue of self-reference to prove that the human mind is not a “mechanism.” I noted in those places that such supposed proofs fail, and at this point it is easy to see that they will fail in general, if they depend on such reasoning. What is possible or impossible here has nothing to do with such things, and everything to do with self-reference. You cannot have a mirror and a camera so perfect that you can get an actually infinite series of images by taking a picture of the mirror with the camera, but there is nothing about such a situation that could not be captured by an image outside the situation, just as a man outside the room could draw everything in the room, including the picture and its details. This does not show that a man outside the room has a superior drawing ability compared with the man in the room. The ability of someone else to say whether the third statement in the liar game is true or false does not prove that the other person does not have a “merely human” mind (analogous to a mere mechanism), despite the fact that you yourself cannot say whether it is true or false.

There is a grain of truth in Chastek’s argument, however. It does follow that if someone says that reality as a whole is a formal system, and adds that we can know what that system is, their position is absurd, since if we knew such a system we could indeed derive a specific arithmetical truth, one that we could state in detail, which would be unprovable from the system, that is, from reality, but nonetheless proved true by us. And this is logically impossible, since we are a part of reality.

At this point one might be tempted to say, “Now we have fully understood the situation. So all of these paradoxes and so on don’t prevent us from understanding reality perfectly, even if that was the original appearance.”

But this is similar to one of two things.

First, a man can stand outside the room and draw a picture of everything in it, including the picture, and say, “Behold. A picture of the room and everything in it.” Yes, as long as you are not in the room. But if the room is all of reality, you cannot get outside it, and so you cannot draw such a picture.

Second, the man in the room can draw the room, the desk and himself, and draw a smudge on the center of the picture of the desk, and say, “Behold. A smudged drawing of the room and everything in it, including the drawing.” But one only imagines a picture of the drawing underneath the smudge: there is actually no such drawing in the picture of the room, nor can there be.

In the same way, we can fully understand some local situation, from outside that situation, or we can have a smudged understanding of the whole situation, but there cannot be any detailed understanding of the whole situation underneath the smudge.

I noted that I disagreed with Lawson’s attempt to resolve the question of truth. I did not go into detail, and I will not, since the book is very long and an adequate discussion would be much longer than I am willing to attempt, at least at this time; but I will give some general remarks. He sees, correctly, that there are problems both with saying that “truth exists” and with saying that “truth does not exist,” taken according to the usual concept of truth, but in the end his position amounts to saying that the denial of truth is truer than the affirmation of truth. This seems absurd, and it is, but not quite so much as appears, because he does recognize the incoherence and makes an attempt to get around it. The way of thinking is something like this: we need to avoid the concept of truth. But this means we also need to avoid the concept of asserting something, because if you assert something, you are saying that it is true. So he needs to say, “assertion does not exist,” but without asserting it. Consequently he comes up with the concept of “closure,” which is meant to replace the concept of asserting, and he “asserts” things in the new sense. This sense is not intended to assert anything at all in the usual sense. In fact, he concludes that language does not refer to the world at all.

Apart from the evident absurdity, exacerbated by my own realist description of his position, we can see from the general account of self-reference why this is the wrong answer. The man in the room might start out wanting to draw a picture of the room and everything in it, and then come to realize that this project is impossible, at least for someone in his situation. But suppose he concludes: “After all, there is no such thing as a picture. I thought pictures were possible, but they are not. There are just marks on paper.” The conclusion is obviously wrong. The fact that pictures are things themselves does prevent pictures from being exhaustive pictures of themselves, but it does not prevent them from being pictures in general. And in the same way, the fact that we are part of reality prevents us from having an exhaustive understanding of reality, but it does not prevent us from understanding in general.

There is one last temptation in addition to the two ways discussed above of saying that there can be an exhaustive drawing of the room and the picture. Someone might say that the room itself, and everything in it, is an exhaustive representation of itself and everything in it. Apart from being an abuse of the word “representation,” I think this is delusional, but that is a story for another time.

Truth in Ordinary Language

After the incident with the tall man, I make plans to meet my companion the following day. “Let us meet at sunrise tomorrow,” I say. They ask in response, “How will I know when the sun has risen?”

When is it true to say that the sun will rise, or that the sun has risen? And what would it take for such statements to be false?

Virtually no one finds themselves uncomfortable with this language despite the fact that the sun has no physical motion called “rising”; rather, the earth is rotating, giving the appearance of movement to the sun. I will ignore issues of relativity, precisely because they are evidently irrelevant. It is not just that the sun is not moving, but that we know that the physical motion of the sun one way or another is irrelevant. The rising of the sun has nothing to do with a deep physical or metaphysical account of the sun as such. Instead, it is about that thing that happens every morning. What would it take for it to be false that the sun will rise tomorrow? Well, if the earth is destroyed today, then presumably the sun will not rise tomorrow. Or if tomorrow it is dark at noon and everyone on Twitter is in an uproar about the fact that the sun is visible at the height of the sky at midnight in their part of the world, then it will have been false that the sun was going to rise in the morning. In other words, the only thing that could possibly falsify the claim about the sun would be a falsification of our expectations about our experience of the sun.

As in the last post, however, this does not mean that the statement about the sun is about our expectations. It is about the sun. But the only thing it says about the sun is something like, “The sun will be and do whatever it needs to, including in relative terms, in order for our ordinary experience of a sunrise to be as it usually is.” I said something similar here about the truth of attributions of sensible qualities, such as when we say that “the banana is yellow.”

All of this will apply in general to all of our ordinary language about ourselves, our lives, and the world.

Truth and Expectation

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.
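As a toy illustration (entirely my own, with invented thresholds), the verdict could be modeled as a comparison of the actual height against the rough range that “pretty tall” led me to expect:

```python
# A toy model of judging "that man is pretty tall" against expectations.
# The numbers are invented for illustration; real expectations are vaguer
# and vary from person to person.

LOW_CM = 183    # the rough lower bound "pretty tall" leads me to expect
BORDER_CM = 4   # the fuzzy band around that bound where it's hard to say

def was_i_right(actual_cm: float) -> str:
    """Judge the original vague claim by surprise relative to expectations."""
    if actual_cm < LOW_CM - BORDER_CM:
        return "no, I was definitely wrong"   # surprised at how short he is
    if actual_cm < LOW_CM + BORDER_CM:
        return "it's hard to say"             # borderline case
    return "yes, I was definitely right"      # unsurprised, or surprised at how tall

print(was_i_right(172))  # -> no, I was definitely wrong
print(was_i_right(184))  # -> it's hard to say
print(was_i_right(193))  # -> yes, I was definitely right
```

Note the asymmetry: being surprised at how tall the man is never makes the claim wrong, matching the point that expectations falsify it only in one direction.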

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply relax the claim about the relationship between meaning and expectations somewhat, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” and “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. This sense cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
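The two kinds of self-prediction can be put into a minimal sketch. This is my own toy illustration, not a model from the post: the choice history, the options, and the “tastiness” scores are all invented.

```python
# A toy agent predicting its own choice in two ways:
# (1) by efficient causes: induction over its past behavior, and
# (2) by final causes: inferring a goal that explains that behavior.

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]

def predict_by_habit(history: list[str]) -> str:
    """Efficient causes: expect the most frequent past choice to recur."""
    return max(set(history), key=history.count)

def predict_by_goal(scores: dict[str, float]) -> str:
    """Final causes: the history looks like taste-seeking, so predict
    the option that best serves that inferred goal."""
    return max(scores, key=scores.get)

# Invented pleasantness scores, as if inferred from the history above.
tastiness = {"vanilla": 0.9, "chocolate": 0.7}

print(predict_by_habit(past_choices))  # vanilla: habit-following
print(predict_by_goal(tastiness))      # vanilla: goal-seeking
```

In both cases the agent is only guessing what it will do, but the second way of guessing just is goal-seeking behavior.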

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible, which makes the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.