Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what a mind in general requires in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.’”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably mistaken: there could not be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world to learn about itself and its own effects on the world. This sense cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
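The two ways of self-prediction can be sketched in a few lines of code. This is only an illustrative toy; the choice history and the “tastiness” scores are invented for the example, not drawn from anything above:

```python
# Two ways a mind might predict its own next action:
# (1) efficient-cause style: induction over past choices (habit),
# (2) final-cause style: infer a goal from past behavior, then expect
#     the action that best serves that goal.

from collections import Counter

# Hypothetical record of past choices.
past_choices = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]

def predict_by_habit(history):
    """Efficient-cause prediction: expect the most frequent past choice."""
    return Counter(history).most_common(1)[0][0]

# Hypothetical pleasantness scores the mind infers from its own history.
tastiness = {"vanilla": 0.9, "chocolate": 0.7}

def predict_by_goal(options, value):
    """Final-cause prediction: expect the option that best serves the
    inferred end (here, pleasant taste)."""
    return max(options, key=lambda o: value[o])

print(predict_by_habit(past_choices))
print(predict_by_goal(["vanilla", "chocolate"], tastiness))
```

In both cases the mind is only guessing what it will do, but the second guess is computed by treating its own behavior as ordered to an end, which is exactly what makes it look like goal-seeking.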

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent: it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.


Idealized Idealization

On another occasion, I discussed the Aristotelian idea that the act of the mind does not use an organ. In an essay entitled “Immaterial Aspects of Thought,” James Ross claims that he can establish the truth of this position definitively. He summarizes the argument:

Some thinking (judgment) is determinate in a way no physical process can be. Consequently, such thinking cannot be (wholly) a physical process. If all thinking, all judgment, is determinate in that way, no physical process can be (the whole of) any judgment at all. Furthermore, “functions” among physical states cannot be determinate enough to be such judgments, either. Hence some judgments can be neither wholly physical processes nor wholly functions among physical processes.

Certain thinking, in a single case, is of a definite abstract form (e.g. N x N = N²), and not indeterminate among incompossible forms (see I below). No physical process can be that definite in its form in a single case. Adding cases even to infinity, unless they are all the possible cases, will not exclude incompossible forms. But supplying all possible cases of any pure function is impossible. So, no physical process can exclude incompossible functions from being equally well (or badly) satisfied (see II below). Thus, no physical process can be a case of such thinking. The same holds for functions among physical states (see IV below).

In essence, the argument is that squaring a number and similar things are infinitely precise processes, and no physical process is infinitely precise. Therefore squaring a number and similar things are not physical processes.

The problem, unfortunately, lies with the major premise here. Squaring a number, and similar things, in the way that we in fact do them, are not infinitely precise processes.

Ross argues that they must be:

Can judgments really be of such definite “pure” forms? They have to be; otherwise, they will fail to have the features we attribute to them and upon which the truth of certain judgments about validity, inconsistency, and truth depend; for instance, they have to exclude incompossible forms or they would lack the very features we take to be definitive of their sorts: e.g., conjunction, disjunction, syllogistic, modus ponens, etc. The single case of thinking has to be of an abstract “form” (a “pure” function) that is not indeterminate among incompossible ones. For instance, if I square a number–not just happen in the course of adding to write down a sum that is a square, but if I actually square the number–I think in the form “N x N = N².”

The same point again. I can reason in the form, modus ponens (“If p then q”; “p”; “therefore, q”). Reasoning by modus ponens requires that no incompossible forms also be “realized” (in the same sense) by what I have done. Reasoning in that form is thinking in a way that is truth-preserving for all cases that realize the form. What is done cannot, therefore, be indeterminate among structures, some of which are not truth preserving. That is why valid reasoning cannot be only an approximation of the form, but must be of the form. Otherwise, it will as much fail to be truth-preserving for all relevant cases as it succeeds; and thus the whole point of validity will be lost. Thus, we already know that the evasion, “We do not really conjoin, add, or do modus ponens but only simulate them,” cannot be correct. Still, I shall consider it fully below.

“It will as much fail to be truth-preserving for all relevant cases as it succeeds” is an exaggeration here. If you perform an operation which approximates modus ponens, then that operation will be approximately truth preserving. It will not be equally truth preserving and not truth preserving.
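The point can be made concrete with a small simulation (the 5% error rate is an arbitrary assumption chosen for illustration): an inference step that only approximates modus ponens, botching the conclusion a small fraction of the time, is approximately truth-preserving, not failing as often as it succeeds.

```python
# An operation that *approximates* modus ponens is *approximately*
# truth-preserving. The error rate below is invented for illustration.

import random

random.seed(0)

def noisy_modus_ponens(p, p_implies_q, error_rate=0.05):
    """Given the truth of p and of (p -> q), conclude q -- but with a
    small chance of botching the inference."""
    if p and p_implies_q:
        return random.random() >= error_rate  # True = correct conclusion
    return None  # premises not satisfied; no conclusion drawn

trials = 10_000
successes = sum(noisy_modus_ponens(True, True) for _ in range(trials))
rate = successes / trials
print(f"truth-preserving in {rate:.1%} of cases")  # ~95%, not ~50%
```

The approximate reasoner preserves truth in roughly 95% of applications, which is very different from being “as much” a failure as a success.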

I have noted many times in the past, as for example here, here, here, and especially here, that following the rules of syllogism does not in practice infallibly guarantee that your conclusions are true, even if your premises are in some way true, because of the vagueness of human thought and language. In essence, Ross is making a contrary argument: we know, he is claiming, that our arguments infallibly succeed; therefore our thoughts cannot be vague. But it is empirically false that our arguments infallibly succeed, so the argument is mistaken right from its starting point.

There is also a strawmanning of the opposing position here insofar as Ross describes those who disagree with him as saying that “we do not really conjoin, add, or do modus ponens but only simulate them.” This assumes that unless you are doing these things perfectly, rather than approximating them, then you are not doing them at all. But this does not follow. Consider a triangle drawn on a blackboard. Consider which of the following statements is true:

  1. There is a triangle drawn on the blackboard.
  2. There is no triangle drawn on the blackboard.

Obviously, the first statement is true, and the second false. But in Ross’s way of thinking, we would have to say, “What is on the blackboard is only approximately triangular, not exactly triangular. Therefore there is no triangle on the blackboard.” This of course is wrong, and his description of the opposing position is wrong in the same way.

Naturally, if we take “triangle” as shorthand for “exact rather than approximate triangle,” then (2) will be true. And in a similar way, if we take “really conjoin” and so on as shorthand for “really conjoin exactly and not approximately,” then those who disagree will indeed say that we do not do those things. But this is not a problem unless you assume from the beginning that our thoughts are infinitely precise, and Ross is attempting to establish that this must be the case, rather than claiming to take it as given. (That is, the summary takes it as given, but Ross attempts throughout the article to establish it.)

One could attempt to defend Ross’s position as follows: we must have infinitely precise thoughts, because we can understand the words “infinitely precise thoughts.” Or in the case of modus ponens, we must have an infinitely precise understanding of it, because we can distinguish between “modus ponens, precisely,” and “approximations of modus ponens.” But the error here is similar to the error of saying that one must have infinite certainty about some things, because otherwise one will not have infinite certainty about the fact that one does not have infinite certainty, as though this were a contradiction. It is no contradiction for all of your thoughts to be fallible, including this one, and it is no contradiction for all of your thoughts to be vague, including your thoughts about precision and approximation.

The title of this post in fact refers to this error, which is probably the fundamental problem in Ross’s argument. Triangles in the real world are not perfectly triangular, but we have an idealized concept of a triangle. In precisely the same way, the process of idealization in the real world is not an infinitely precise process, but we have an idealized concept of idealization. Concluding that our acts of idealization must actually be ideal in themselves, simply because we have an idealized concept of idealization, would be a case of confusing the way of knowing with the way of being. It is a particularly confusing case simply because the way of knowing in this case is also materially the being which is known. But this material identity does not make the mode of knowing into the mode of being.

We should consider also Ross’s minor premise, that a physical process cannot be determinate in the way required:

Whatever the discriminable features of a physical process may be, there will always be a pair of incompatible predicates, each as empirically adequate as the other, to name a function the exhibited data or process “satisfies.” That condition holds for any finite actual “outputs,” no matter how many. That is a feature of physical process itself, of change. There is nothing about a physical process, or any repetitions of it, to block it from being a case of incompossible forms (“functions”), if it could be a case of any pure form at all. That is because the differentiating point, the point where the behavioral outputs diverge to manifest different functions, can lie beyond the actual, even if the actual should be infinite; e.g., it could lie in what the thing would have done, had things been otherwise in certain ways. For instance, if the function is x(*)y = (x + y, if y < 10^40 years, = x + y +1, otherwise), the differentiating output would lie beyond the conjectured life of the universe.

Just as rectangular doors can approximate Euclidean rectangularity, so physical change can simulate pure functions but cannot realize them. For instance, there are no physical features by which an adding machine, whether it is an old mechanical “gear” machine or a hand calculator or a full computer, can exclude its satisfying a function incompatible with addition, say quaddition (cf. Kripke’s definition of the function to show the indeterminacy of the single case: quus, symbolized by the plus sign in a circle, “is defined by: x quus y = x + y, if x, y < 57, =5 otherwise”) modified so that the differentiating outputs (not what constitutes the difference, but what manifests it) lie beyond the lifetime of the machine. The consequence is that a physical process is really indeterminate among incompatible abstract functions.

Extending the list of outputs will not select among incompatible functions whose differentiating “point” lies beyond the lifetime (or performance time) of the machine. That, of course, is not the basis for the indeterminacy; it is just a grue-like illustration. Adding is not a sequence of outputs; it is summing; whereas if the process were quadding, all its outputs would be quadditions, whether or not they differed in quantity from additions (before a differentiating point shows up to make the outputs diverge from sums).

For any outputs to be sums, the machine has to add. But the indeterminacy among incompossible functions is to be found in each single case, and therefore in every case. Thus, the machine never adds.

There is some truth here, and some error here. If we think about a physical process in the particular way that Ross is considering it, it will be true that it will always be able to be interpreted in more than one way. This is why, for example, in my recent discussion with John Nerst, John needed to say that the fundamental cause of things had to be “rules” rather than e.g. fundamental particles. The movement of particles, in itself, could be interpreted in various ways. “Rules,” on the other hand, are presumed to be something which already has a particular interpretation, e.g. adding as opposed to quadding.

On the other hand, there is also an error here. The prima facie sign of this error is the statement that an adding machine “never adds.” Just as according to common sense we can draw triangles on blackboards, so according to common sense the calculator on my desk can certainly add. This is connected with the problem with the entire argument. Since “the calculator can add” is true in some way, there is no particular reason that “we can add” cannot be true in precisely the same way. Ross wishes to argue that we can add in a way that the calculator cannot because, in essence, we do it infallibly; but this is flatly false. We do not do it infallibly.
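Kripke’s quus, as quoted above, is easy to write down, and doing so makes the indeterminacy claim vivid: any finite record of a machine’s outputs with small arguments is equally consistent with addition and with quaddition, while the differentiating case lies beyond the observed range. A minimal sketch (the observed range here is an arbitrary choice):

```python
# Kripke's deviant function, as quoted above:
# x quus y = x + y, if x, y < 57; = 5 otherwise.

def plus(x, y):
    return x + y

def quus(x, y):
    """Agrees with addition below 57, then diverges."""
    return x + y if x < 57 and y < 57 else 5

# Every output the machine has actually produced (small arguments)
# fits both functions equally well...
observed = [(a, b) for a in range(10) for b in range(10)]
assert all(plus(a, b) == quus(a, b) for a, b in observed)

# ...while the differentiating outputs lie beyond what was observed.
print(plus(60, 60), quus(60, 60))  # 120 5
```

This captures the true part of Ross’s point: the finite table of outputs does not by itself exclude the incompossible function. What it does not establish is his conclusion that the machine therefore “never adds.”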

Considered metaphysically, the problem here is ignorance of the formal cause. If physical processes were entirely formless, they indeed would have no interpretation, just as a formless human (were that possible) would be a philosophical zombie. But in reality there are forms in both cases. In this sense, Ross’s argument comes close to saying “human thought is a form or formed, but physical processes are formless.” Since in fact neither is formless, there is no reason (at least established by this argument) why thought could not be the form of a physical process.


Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.
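The idea that predicting a not-yet-actual sensory state serves to bring it about can be caricatured in a few lines. This is only an illustrative toy with invented numbers, not Clark’s or Friston’s actual formalism: action here simply works to cancel the error between a limb’s current position and its predicted position.

```python
# A toy "self-fulfilling prophecy": the agent predicts a sensory state
# that is not yet actual, and action reduces the resulting prediction
# error until the predicted state is brought about. Gain and step count
# are arbitrary illustrative values.

def act_to_fulfil_prediction(position, predicted, gain=0.3, steps=40):
    """Move so as to cancel proprioceptive prediction error."""
    for _ in range(steps):
        error = predicted - position   # prediction error
        position += gain * error       # action shrinks the error
    return position

start, predicted = 0.0, 1.0
final = act_to_fulfil_prediction(start, predicted)
print(round(final, 4))  # the limb ends up where it was predicted to be
```

The prediction begins false (the limb is at 0, not 1) and ends true, which is the sense in which, on this account, a decision can consist in the expectation of the action decided upon.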

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by whatever discussion of the topic does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.”

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.
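The shape of this answer can be sketched numerically (all values below are invented for illustration): prediction error is always measured against the creature’s generative model, and a model with evolved, structurally built-in priors assigns large error to the darkened room itself.

```python
# Why built-in "expectations" block the darkened-room strategy:
# error is relative to the creature's generative model.

def prediction_error(sensed, expected):
    """Squared error between a sensory stream and the model's priors."""
    return sum((s - e) ** 2 for s, e in zip(sensed, expected))

# Sensory streams as [light level, nourishment level], invented values.
darkened_room = [0.0, 0.0]
engaged_world = [0.6, 0.9]

# A blank model with no priors would indeed favor the darkened room...
blank_priors = [0.0, 0.0]
assert (prediction_error(darkened_room, blank_priors)
        < prediction_error(engaged_world, blank_priors))

# ...but an evolved creature's model structurally expects light and
# nourishment, so sitting still generates *more* error than engaging.
evolved_priors = [0.5, 1.0]
err_room = prediction_error(darkened_room, evolved_priors)
err_world = prediction_error(engaged_world, evolved_priors)
print(err_room > err_world)  # True
```

The abstract algorithm “minimize prediction error” only determines behavior once the priors are fixed, and the priors are fixed by the evolved, embodied creature, which is the point of Clark’s appeal to embodiment.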

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that were nourished and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Decisions as Predictions

Among acts of will, St. Thomas distinguishes intention and choice:

The movement of the will to the end and to the means can be considered in two ways. First, according as the will is moved to each of the aforesaid absolutely and in itself. And thus there are really two movements of the will to them. Secondly, it may be considered accordingly as the will is moved to the means for the sake of the end: and thus the movement of the will to the end and its movement to the means are one and the same thing. For when I say: “I wish to take medicine for the sake of health,” I signify no more than one movement of my will. And this is because the end is the reason for willing the means. Now the object, and that by reason of which it is an object, come under the same act; thus it is the same act of sight that perceives color and light, as stated above. And the same applies to the intellect; for if it consider principle and conclusion absolutely, it considers each by a distinct act; but when it assents to the conclusion on account of the principles, there is but one act of the intellect.

Choice is about the means, such as taking medicine in his example, while intention is about the end, as health in his example. This makes sense in terms of how we commonly use the terms. When we do speak of choosing an end, we are normally considering which of several alternative intermediate ends are better means towards an ultimate end. And thus we are “choosing,” not insofar as the thing is an end, but insofar as it is a means towards a greater end that we intend.

Discussing the human mind, we noted earlier that a thing often seems fairly simple when it is considered in general, but turns out to have a highly complex structure when considered in detail. The same thing will turn out to be the case if we attempt to consider the nature of these acts of will in detail.

Consider the hypothesis that both intention and choice consist basically in beliefs: intention would consist in the belief that one will in fact obtain a certain end, or at least that one will come as close to it as possible. Choice would consist in the belief that one will take, or that one is currently taking, a certain temporally immediate action for the sake of such an end. I will admit immediately that this hypothesis will not turn out to be entirely right, but as we shall see, the consideration will turn out to be useful.

First we will bring forward a number of considerations in favor of the hypothesis, and then, in another post, some criticisms of it.

First, in favor of the hypothesis, we should consider the fact that believing that one will take a certain course of action is virtually inseparable from deciding to take it; the two are not clearly distinguishable at all. Suppose someone says, “I intend to take my vacation in Paris, but I believe that I will take it in Vienna instead.” On the face of it, this is nonsense. We might make sense of it by saying that the person really meant that he had first decided to go to Paris, but then obstacles came up and he now realizes that it will not be possible. But in that case, he also changes his decision: he now intends to go to Vienna. It is completely impossible that he currently intends to go to Paris, but fully believes that he will not go, and that he will go to Vienna instead.

Likewise, suppose someone says, “I haven’t yet decided where to take my vacation. But I am quite convinced that I am going to take it in Vienna.” Again, this is almost nonsensical: if he is convinced that he will go to Vienna, we would normally say that he has already made up his mind; it is not true that he has not decided yet. As in the previous case, we might be able to come up with circumstances where someone would say this or something like it. For example, if someone else is attempting to convince him to come to Paris, he might say that he has not yet decided, meaning that he is willing to think about it for a bit, but that he fully expects to end up going to Vienna. But in this case, it is more natural to say that his decision and his certainty that he will go to Vienna are proportional: the only sense in which he hasn’t decided yet is the degree to which he thinks there is some chance that he will change his mind and go to Paris. Thus if there is no chance at all of that, then he is completely decided, while if he is somewhat unsure, his decision is not yet perfect but only partial.

Both of the above cases would fit with the claim that a decision is simply a belief about what one is going to do, although they would not necessarily exclude the possibility that it is a separate thing, even if inseparably connected to the belief.

We can also consider beliefs and decisions as something known from their effects. I noted elsewhere that we recognize the nature of desire from its effect, namely from the fact that when we have a desire, we tend to bring about the thing we desire. Insofar as a decision is a rational desire, the same thing applies to decisions as to other kinds of desires. We would not know decisions as decisions, if we never did the things we have decided to do. Likewise, belief is a fairly abstract object, and it is at least plausible that we would come to know it from its more concrete effects.

Now consider the effects of the decision to go to Vienna, compared to the effects of the belief that you will go to Vienna. Both of them will result in you saying, “I am going to go to Vienna.” And if we look at belief as I suggested in the discussion to this post, namely more or less as treating something as a fact, then belief will have other consequences, such as buying a ticket for Vienna. For if you are treating it as a fact that you are going to go there, either you will buy a ticket, or you will give up the belief. In a similar way, if you have decided to go, either you will buy a ticket, or you will change your decision. So the effects of the belief and the effects of the decision seem to be entirely the same. If we know the thing from its effects, then, it seems we should consider the belief and the decision to be entirely the same.

There is an obvious objection here, but as I said the consideration of objections will come later.

Again, consider a situation where there are two roads, road A and road B, to your destination C. There is a fallen bridge along road B, so road B would not be a good route, while road A is a good route. It is reasonable for a third party who knows that you want to get to C and that you have considered the state of the roads, to conclude that you will take road A. But if this is reasonable for someone else, then it is reasonable for you: you know that you want to get to C, and you know that you have considered the state of the roads. So it is reasonable for you to conclude that you will take road A. Note that this is purely about belief: there was no need for an extra “decision” factor. That you will in fact take road A is a logical conclusion from the known situation. But now that you are convinced that you will take road A, there is no need for you to consider whether to take road A or road B; there is nothing to decide anymore. Everything is already decided as soon as you come to that conclusion, which is a matter of forming a belief. Once again, it seems as though your belief that you will take road A just is your decision, and there is nothing more to it.
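The road argument can be caricatured in a few lines of code. This is my own toy sketch, not anything in the original argument, and all the names in it are hypothetical; the point it illustrates is only that the belief about what one will do falls out of the known facts by plain inference, with no separate “decision” step appearing anywhere in the computation.

```python
# Toy model (my own illustration): a third party -- or the agent itself --
# inferring the agent's action purely from beliefs about the situation.

def infer_route(goal, routes):
    """Return the route one believes will be taken: a passable route to the goal."""
    passable = [name for name, info in routes.items()
                if info["leads_to"] == goal and info["passable"]]
    # The belief "I will take road A" follows logically from the facts;
    # nothing in this computation corresponds to a distinct act of deciding.
    return passable[0] if passable else None

routes = {
    "A": {"leads_to": "C", "passable": True},
    "B": {"leads_to": "C", "passable": False},  # the fallen bridge
}
print(infer_route("C", routes))  # -> A
```

The same inference is equally available to the agent and to the third party, which is the symmetry the paragraph above relies on.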

Once again, there is an obvious objection, but it will have to wait until the next post.

Statistical Laws of Choice

I noted in an earlier post the necessity of statistical laws of nature. This will necessarily apply to human actions as a particular case, as I implied there in mentioning the amount of food humans eat in a year.

Someone might object. It was said in the earlier post that this will happen unless there is a deliberate attempt to evade this result. But since we are speaking of human beings, there might well be such an attempt. So for example if we ask someone to choose to raise their right hand or their left hand, this might converge to an average, such as 50% each, or perhaps the right hand 60% of the time, or something of this kind. But presumably someone who starts out with the deliberate intention of avoiding such an average will be able to do so.

Unfortunately, such an attempt may succeed in the short run, but will necessarily fail in the long run, because although it is possible in principle, it would require an infinite knowing power, which humans do not have. As I pointed out in the earlier discussion, attempting to prevent convergence requires longer and longer strings on one side or the other. But if you need to raise your right hand a few trillion times before switching again to your left, you will surely lose track of your situation. Nor can you remedy this by writing things down, or by other technical aids: you may succeed in doing things trillions of times with this method, but if you do it forever, the numbers will also become too large to write down. Naturally, at this point we are only making a theoretical point, but it is nonetheless an important one, as we shall see later.
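The point about ever-longer strings can be illustrated with a small simulation. This is my own sketch, and the doubling-block strategy in it is just one hypothetical way of preventing convergence: a chooser who wants the running frequency of right-hand choices to keep oscillating must produce runs of exponentially growing length, and therefore must keep track of counts that grow without bound.

```python
# Sketch (my own illustration): preventing the running frequency from
# converging requires blocks of doubling length -- R, LL, RRRR, LLLLLLLL, ...
# -- so the chooser must count arbitrarily far without losing track.

def oscillating_choices(n_blocks):
    """Alternate hands in blocks of doubling length (1 = right, 0 = left)."""
    choices = []
    side, length = 1, 1
    for _ in range(n_blocks):
        choices.extend([side] * length)
        side = 1 - side
        length *= 2  # each run must be twice as long as the last
    return choices

def running_frequency(choices):
    """Running relative frequency of right-hand choices."""
    total, freqs = 0, []
    for i, c in enumerate(choices, start=1):
        total += c
        freqs.append(total / i)
    return freqs

freqs = running_frequency(oscillating_choices(20))
# The frequency never settles: it keeps swinging roughly between 1/3 and 2/3.
print(min(freqs[100:]), max(freqs[100:]))
```

With twenty blocks the last run is already over half a million choices long, which is the practical point: no unaided human, and in the limit no finite record-keeping method, can sustain the pattern.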

In any case, in practice people do not tend even to make such attempts, and consequently it is far easier to predict their actions in a roughly statistical manner. Thus for example it would not be hard to discover the frequency with which an individual chooses chocolate ice cream over vanilla.

Mind and Matter

In Book III of On the Soul, Aristotle argues that the intellect does not have a bodily organ:

Therefore, since everything is a possible object of thought, mind in order, as Anaxagoras says, to dominate, that is, to know, must be pure from all admixture; for the co-presence of what is alien to its nature is a hindrance and a block: it follows that it too, like the sensitive part, can have no nature of its own, other than that of having a certain capacity. Thus that in the soul which is called mind (by mind I mean that whereby the soul thinks and judges) is, before it thinks, not actually any real thing. For this reason it cannot reasonably be regarded as blended with the body: if so, it would acquire some quality, e.g. warmth or cold, or even have an organ like the sensitive faculty: as it is, it has none. It was a good idea to call the soul ‘the place of forms’, though (1) this description holds only of the intellective soul, and (2) even this is the forms only potentially, not actually.
Observation of the sense-organs and their employment reveals a distinction between the impassibility of the sensitive and that of the intellective faculty. After strong stimulation of a sense we are less able to exercise it than before, as e.g. in the case of a loud sound we cannot hear easily immediately after, or in the case of a bright colour or a powerful odour we cannot see or smell, but in the case of mind thought about an object that is highly intelligible renders it more and not less able afterwards to think objects that are less intelligible: the reason is that while the faculty of sensation is dependent upon the body, mind is separable from it.

There are two arguments here, one from the fact that the mind can understand at all, and the other from the effect of thinking about highly intelligible things.

St. Thomas explains the first argument:

The following argument may make this point clear. Anything that is in potency with respect to an object, and able to receive it into itself, is, as such, without that object; thus the pupil of the eye, being potential to colours and able to receive them, is itself colourless. But our intellect is so related to the objects it understands that it is in potency with respect to them, and capable of being affected by them (as sense is related to sensible objects). Therefore it must itself lack all those things which of its nature it understands. Since then it naturally understands all sensible and bodily things, it must be lacking in every bodily nature; just as the sense of sight, being able to know colour, lacks all colour. If sight itself had any particular colour, this colour would prevent it from seeing other colours, just as the tongue of a feverish man, being coated with a bitter moisture, cannot taste anything sweet. In the same way then, if the intellect were restricted to any particular nature, this connatural restriction would prevent it from knowing other natures. Hence he says: ‘What appeared inwardly would prevent and impede’ (its knowledge of) ‘what was without’; i.e. it would get in the way of the intellect, and veil it so to say, and prevent it from inspecting other things. He calls ‘the inwardly appearing’ whatever might be supposed to be intrinsic and co-natural to the intellect and which, so long as it ‘appeared’ therein would necessarily prevent the understanding of anything else; rather as we might say that the bitter moisture was an ‘inwardly appearing’ factor in a fevered tongue.

This is similar to St. Thomas’s suggestion elsewhere that matter and understanding are intrinsically opposed to one another. I cautioned the reader there against taking such an argument as definitive too quickly, and I would do the same here. Consider the argument about sensation: it is true enough that the pupil isn’t colored, and that perception of temperature is relative to the temperature of the organ of touch, or some aspects of it, which suggests that heat in the organ impedes the sensation of heat. On the other hand, the optic nerve and the visual cortex are arguably even more necessary to the sense of sight than the pupil, and they most certainly are not colorless. The facts about the pupil, the way touch functions, and so on, are worth taking into account, but they do not come close to establishing as a fact that the intellect does not have an organ.

Likewise, with the second argument, Aristotle is certainly pointing to a difference between the intellect and the senses, even if this argument might need qualification, since one does tire even of thinking. But saying that the intellect is not merely another sense is one thing, and saying that it does not have an organ at all is another.

We previously considered Sean Collins’s discussion of Aristotle and the history of science. Following on one of the passages quoted in the linked post, Collins continues:

I said above that Aristotle thinks somewhat Platonically “despite himself.” He himself is very remarkably aware that matter will make a difference in the account of things, even if the extent of the difference remains as yet unknown. And Aristotle makes, in this connection, a distinction which is well known to the scholastic tradition, but not equally well understood: that, namely, between the “logical” consideration of a question, and the “physical” consideration of it. Why make that distinction? Its basis lies in the discovery that matter is a genuine principle. For, on the one hand, the mind and its act are immaterial; but the things to be known in the physical world are material. It becomes necessary, therefore, for the mind to “go out of itself,” as it were, in the effort to know things. This is precisely what gives rise to what is called the “order of concretion.”

But how much “going out of itself” will be necessary, or precisely how that is to be done, is not something that can be known without experience — the experience, as it turns out, not merely of an individual but of an entire tradition of thought. Here I am speaking of history, and history has, indeed, everything to do with what I am talking about. Aristotle’s disciples are not always as perspicacious as their master was. Some of them suppose that they should follow the master blindly in the supposition that history has no significant bearing on the “disciplines.” That supposition amounts, at least implicitly, to a still deeper assumption: the assumption, namely, that the materiality of human nature, and of the cosmos, is not so significant as to warrant a suspicion that historical time is implicated in the material essence of things. Aristotle did not think of time as essentially historical in the sense I am speaking of here. The discovery that it was essentially historical was not yet attainable.

I would argue that Sean Collins should consider how similar considerations would apply to his remark that “the mind and its act are immaterial.” Perhaps we know in a general way that sensation is more immaterial than growth, but we do not think that sensation therefore does not involve an organ. How confident should one be that the mind does not use an organ based on such general considerations? Just as there is a difference between the “logical” consideration of time and motion and their “physical” consideration, so there might be a similar difference between two kinds of consideration of the mind.

Elsewhere, Collins criticizes a certain kind of criticism of science:

We do encounter the atomists, who argue to a certain complexity in material things. Most of our sophomore year’s natural science is taken up with them. But what do we do with them? The only atomists we read are the early ones, who are only just beginning to discover evidence for atoms. The evidence they possess for atoms is still weak enough so that we often think we can take refuge in general statements about the hypothetical nature of modern science. In other words, without much consideration, we are tempted to write modern science off, so that we can get back to this thing we call philosophy.

Some may find that description a little stark, but at any rate, right here at the start, I want to note parenthetically that such a dismissal would be far less likely if we did not often confuse experimental science with the most common philosophical account of contemporary science. That most common philosophical account is based largely on the very early and incomplete developments of science, along with an offshoot of Humean philosophy which came into vogue mainly through Ernst Mach. But if we look at contemporary science as it really is today, and take care to set aside accidental associations it has with various dubious philosophies, we find a completely wonderful and astonishing growth of understanding of the physical structure not only of material substances, but of the entire cosmos. And so while some of us discuss at the lunch table whether the hypothesis of atoms is viable, physicists and engineers around the world make nanotubes and other lovely little structures, even machines, out of actual atoms of various elements such as carbon.

And likewise during such discussions, neuroscientists discuss which parts of the brain are responsible for abstract thought.

When we discussed the mixing of wine and water, we noted how many difficulties can arise when a process is considered in detail, difficulties which one might not notice in a merely general consideration. The same thing will certainly happen in considering how the mind works. For example, how am I choosing these words as I type? I do not have time to consider a vast list of alternatives for each word, even though there would frequently be several possibilities, and sometimes I do think of more than one. Other times I go back and change a word or two, or more. But most of the words are coming to me as though by magic, without any conscious thought. Where is this coming from?

The selection of these words is almost certainly being done by a part of my brain. A sign of this is that those with transcortical motor aphasia have great difficulty selecting words, but do not have a problem with understanding.

This is only one small element of a vast interconnected process involved in understanding, thinking, and speaking. And precisely because there is a very complex process here which is not completely understood, the statement, “well, these elements are organic, but there is also some non-organic element involved,” cannot be proved false in a scientific manner, at least at this time. But neither can it be proved true, and if it did turn out to be true, there would have to be concrete relationships between that element and all the other elements. What would be the contribution of the immaterial element? What would happen if it were lacking? Or, if that question does not make sense because such an element cannot be lacking, why can it not be lacking?


Supreme Good

In Chapter 4 of The Divine Names, Dionysius says:

Now if the Good is above all things (as indeed It is) Its Formless Nature produces all-form; and in It alone Not-Being is an excess of Being, and Lifelessness an excess of Life and Its Mindless state is an excess of Wisdom, and all the Attributes of the Good we express in a transcendent manner by negative images.

Now this is not especially easy to understand. But Dionysius seems to be saying that God does not possess life or mind in a literal sense, but is rather above these things, much as Plotinus held. Possibly somewhat in contrast, he seems to believe that “Good” is an especially appropriate name for God.

According to the account we have given of being and the good, this is correct. If the good is that towards which things tend, then a necessary being must above all be good, because it has such a deep tendency to be that it cannot not be. Likewise, insofar as the good is understood as a final cause of other things, and thus as an ultimate explanation, while the first cause can have nothing else explaining its existence, it must constitute the supreme good not only in relation to itself, but in relation to all other things as well.