The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small differences in its prediction do not seem to make much difference to accuracy. In other words, if it takes its best guess and then arbitrarily modifies it, as long as the modification is small, the modified guess will be just as accurate as the original would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.
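The contrast between the two streams can be made concrete with a toy simulation. The sketch below is my own illustration, not code from Clark or Alexander, and the two “streams” are deliberately cartoonish: one ignores the predictor entirely, while the other (noisily) follows whatever is predicted, the way proprioception follows an intended movement. The point is only that perturbing the best guess raises prediction error on the uncontrolled stream but leaves it flat on the controlled one, which is exactly the asymmetry the mind in the bunker is imagined to notice.

```python
import random

def observe_vision():
    # An external stream: the world ignores our predictions entirely.
    return random.gauss(0.0, 1.0)

def observe_proprioception(prediction):
    # A controlled stream: the body (noisily) follows whatever we predict,
    # so the prediction is largely self-fulfilling.
    return prediction + random.gauss(0.0, 0.05)

def mean_error(stream_is_controlled, perturbation, trials=2000):
    """Average prediction error when the best guess is shifted by `perturbation`."""
    total = 0.0
    for _ in range(trials):
        best_guess = 0.0                    # the model's best estimate
        guess = best_guess + perturbation   # arbitrary small modification
        if stream_is_controlled:
            actual = observe_proprioception(guess)
        else:
            actual = observe_vision()
        total += abs(actual - guess)
    return total / trials

random.seed(0)
# Uncontrolled stream: moving away from the best guess costs accuracy.
print(mean_error(False, 0.0), mean_error(False, 0.5))
# Controlled stream: the perturbed guess is just as accurate as the original.
print(mean_error(True, 0.0), mean_error(True, 0.5))
```

On the uncontrolled stream the perturbed guess does measurably worse; on the controlled stream the error is small and essentially unchanged, which is how such a predictor could discover that it is in control of that stream rather than merely observing it.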

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream”).

 


Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by whatever discussion of the topic does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.
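The logic of Clark’s answer can be put in a toy form. The sketch below is my own illustration, not a model from the book: a creature picks whichever action minimizes its total prediction error, but some of its “expectations” are fixed by its evolved bodily constitution rather than learned. The two actions, their sensory outcomes, and the numbers are all invented for the example.

```python
def sensory_outcome(action):
    # What each action actually produces: (light level, nourishment level).
    return {"sit_in_dark": (0.0, 0.0), "forage": (0.4, 1.0)}[action]

def prediction_error(action, expectations):
    light, food = sensory_outcome(action)
    expected_light, expected_food = expectations
    return abs(light - expected_light) + abs(food - expected_food)

def best_action(expectations):
    # Choose the action whose outcome best matches ALL of the expectations.
    return min(("sit_in_dark", "forage"),
               key=lambda a: prediction_error(a, expectations))

# A pure predictor, free to expect anything at all, can simply expect darkness
# and hunger, and the darkened room then minimizes its error perfectly:
print(best_action((0.0, 0.0)))  # sit_in_dark

# An evolved creature carries a built-in, non-negotiable expectation of being
# nourished; sitting in the dark now generates irreducible error, so it forages:
print(best_action((0.0, 1.0)))  # forage
```

The abstract algorithm “minimize prediction error” permits the darkened room; what rules it out is the fixed set of creature-defining expectations that the algorithm runs against, which is just the point about embodiment.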

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Zombies and Ignorance of the Formal Cause

Let’s look again at Robin Hanson’s account of the human mind, considered previously here.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

What would someone mean by making the original statement that “I know that physical parts interacting just aren’t the kinds of things that can feel by themselves”? If we give this a charitable interpretation, the meaning is that “a collection of physical parts” is something many, and so is not a suitable subject for predicates like “sees” and “understands.” Something that sees is something one, and something that understands is something one.

This however is not Robin’s interpretation. Instead, he understands it to mean that besides the physical parts, there has to be one additional part, namely one which is a part in the same sense of “part”, but which is not physical. And indeed, some tend to think this way. But this of course is not helpful, because the reason a collection of parts is not a suitable subject for seeing or understanding is not because those parts are physical, but because the subject is not something one. And this would remain even if you add a non-physical part or parts. Instead, what is needed to be such a subject is that the subject be something one, namely a living being with the sense of sight, in order to see, or one with the power of reason, for understanding.

What do you need in order to get one such subject from “a collection of parts”? Any additional part, physical or otherwise, will just make the collection bigger; it will not make the subject something one. It is rather the formal cause of a whole that makes the parts one, and this formal cause is not a part in the same sense. It is not yet another part, even a non-physical one.

Reading Robin’s discussion in this light, it is clear that he never even considers formal causes. He does not even ask whether there is such a thing. Rather, he speaks only of material and efficient causes, and appears to be entirely oblivious even to the idea of a formal cause. Thus when asking whether there is anything in addition to the “collection of parts,” he is asking whether there is any additional material cause. And naturally, nothing will have material causes other than the things it is made out of, since “what a thing is made out of” is the very meaning of a material cause.

Likewise, when he says, “Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?”, he shows in two ways his ignorance of formal causes. First, by talking about “feeling stuff,” which implies a kind of material cause. Second, when he says, “actual cause of humans making statements” he is evidently speaking about the efficient cause of people producing sounds or written words.

In both cases, formal causality is the relevant causality. There is no “feeling stuff” at all; rather, there are certain acts, like seeing and understanding, which are unified actions, and these are unified by their forms. Likewise, we can consider the “humans making statements” in two ways: if we simply consider the efficient causes of the sounds, one by one, we might indeed explain them as “simple parts interacting simply.” But they are not actually mere sounds; they are meaningful and express the intention and meaning of a subject. And they have meaning by reason of the forms of the action and of the subject.

In other words, the idea of the philosophical zombie is that the zombie is indeed producing mere sounds. It is not only that the zombie is not conscious, but rather that it really is just interacting parts, and the sounds it produces are just a collection of sounds. We don’t need, then, some complicated method to determine that we are not such zombies. We are by definition not zombies if we say, think, or understand at all.

The same ignorance of the formal cause is seen in the rest of Robin’s comments:

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Again, he is asking whether there is some additional part which has some additional efficient causality, and suggesting that this is unlikely. It is indeed unlikely, but irrelevant, because consciousness is not an additional part, but a formal way of being that a thing has. He continues:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

First, there is no “extra feeling stuff.” There is only a way of being, namely in this case being alive and conscious. Second, there is no coincidence. Robin’s supposed coincidence is that “I am conscious” is thought to mean, “I have feeling stuff,” but the feeling stuff is not the efficient cause of my saying that I have it; instead, the efficient cause is said to be simple parts interacting simply.

Again, the mistake here is simply to completely overlook the formal cause. “I am conscious” does not mean that I have any feeling stuff; it says that I am something that perceives. Of course we can modify Robin’s question: what is the efficient cause of my saying that I am conscious? Is it the fact that I actually perceive things, or is it simple parts interacting simply? But if we think of this in relation to form, it is like asking whether the properties of a square follow from squareness, or from the properties of the parts of a square. And it is perfectly obvious that the properties of a square follow both from squareness, and from the properties of the parts of a square, without any coincidence, and without interfering with one another. In the same way, the fact that I perceive things is the efficient cause of my saying that I perceive things. But the only difference between this actual situation and a philosophical zombie is one of form, not of matter; in a corresponding zombie, “simple parts interacting simply” are the cause of its producing sounds, but it neither perceives anything nor asserts that it is conscious, since its words are meaningless.

The same basic issue, namely Robin’s lack of the concept of a formal cause, is responsible for his statements about philosophical zombies:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

The state of “feeling” is not presumed to have zero causal influence on behavior. It is thought to have precisely a formal influence on behavior. That is, being conscious is why the activity of the conscious person is “saying that they feel” instead of “producing random meaningless sounds that others mistakenly interpret as meaning that they feel.”

Robin is right that philosophical zombies are impossible, however, although not for the reasons that he supposes. The actual reason for this is that it is impossible for disposed matter to lack its corresponding form, and the idea of a zombie is precisely the idea of humanly disposed matter lacking human form.

Regarding his point about “info,” the possession of any information at all is already a proof that one is not a zombie. Since the zombie lacks form, any correlation between one part and another in it is essentially a random material correlation, not one that contains any information. If the correlation is noticed as having any info, then the thing noticing the information, and the information itself, are things which possess form. This argument, as far as it goes, is consistent with Robin’s claim that zombies do not make sense; they do not, but not for the reasons that he posits.

Mind and Matter

In Book III of On the Soul, Aristotle argues that the intellect does not have a bodily organ:

Therefore, since everything is a possible object of thought, mind in order, as Anaxagoras says, to dominate, that is, to know, must be pure from all admixture; for the co-presence of what is alien to its nature is a hindrance and a block: it follows that it too, like the sensitive part, can have no nature of its own, other than that of having a certain capacity. Thus that in the soul which is called mind (by mind I mean that whereby the soul thinks and judges) is, before it thinks, not actually any real thing. For this reason it cannot reasonably be regarded as blended with the body: if so, it would acquire some quality, e.g. warmth or cold, or even have an organ like the sensitive faculty: as it is, it has none. It was a good idea to call the soul ‘the place of forms’, though (1) this description holds only of the intellective soul, and (2) even this is the forms only potentially, not actually.
Observation of the sense-organs and their employment reveals a distinction between the impassibility of the sensitive and that of the intellective faculty. After strong stimulation of a sense we are less able to exercise it than before, as e.g. in the case of a loud sound we cannot hear easily immediately after, or in the case of a bright colour or a powerful odour we cannot see or smell, but in the case of mind thought about an object that is highly intelligible renders it more and not less able afterwards to think objects that are less intelligible: the reason is that while the faculty of sensation is dependent upon the body, mind is separable from it.

There are two arguments here, one from the fact that the mind can understand at all, and the other from the effect of thinking about highly intelligible things.

St. Thomas explains the first argument:

The following argument may make this point clear. Anything that is in potency with respect to an object, and able to receive it into itself, is, as such, without that object; thus the pupil of the eye, being potential to colours and able to receive them, is itself colourless. But our intellect is so related to the objects it understands that it is in potency with respect to them, and capable of being affected by them (as sense is related to sensible objects). Therefore it must itself lack all those things which of its nature it understands. Since then it naturally understands all sensible and bodily things, it must be lacking in every bodily nature; just as the sense of sight, being able to know colour, lacks all colour. If sight itself had any particular colour, this colour would prevent it from seeing other colours, just as the tongue of a feverish man, being coated with a bitter moisture, cannot taste anything sweet. In the same way then, if the intellect were restricted to any particular nature, this connatural restriction would prevent it from knowing other natures. Hence he says: ‘What appeared inwardly would prevent and impede’ (its knowledge of) ‘what was without’; i.e. it would get in the way of the intellect, and veil it so to say, and prevent it from inspecting other things. He calls ‘the inwardly appearing’ whatever might be supposed to be intrinsic and co-natural to the intellect and which, so long as it ‘appeared’ therein would necessarily prevent the understanding of anything else; rather as we might say that the bitter moisture was an ‘inwardly appearing’ factor in a fevered tongue.

This is similar to St. Thomas’s suggestion elsewhere that matter and understanding are intrinsically opposed to one another. I cautioned the reader there about taking such an argument as definitive too quickly, and I would do the same here. Consider the argument about sensation: it is true enough that the pupil isn’t colored, and that perception of temperature is relative to the temperature of the organ of touch, or some aspects of it, which suggests that heat in the organ impedes the sensation of heat. On the other hand, the optic nerve and the visual cortex are arguably even more necessary to the sense of sight than the pupil, and they most certainly are not colorless. Taking this into consideration, the facts about the pupil, and the way touch functions, and so on, seem like facts that should be taken into consideration, but do not even come close to establishing as a fact that the intellect does not have an organ.

Likewise, with the second argument, Aristotle is certainly pointing to a difference between the intellect and the senses, even if this argument might need qualification, since one does tire even of thinking. But saying that the intellect is not merely another sense is one thing, and saying that it does not have an organ at all is another.

We previously considered Sean Collins’s discussion of Aristotle and the history of science. Following on one of the passages quoted in the linked post, Collins continues:

I said above that Aristotle thinks somewhat Platonically “despite himself.” He himself is very remarkably aware that matter will make a difference in the account of things, even if the extent of the difference remains as yet unknown. And Aristotle makes, in this connection, a distinction which is well known to the scholastic tradition, but not equally well understood: that, namely, between the “logical” consideration of a question, and the “physical” consideration of it. Why make that distinction? Its basis lies in the discovery that matter is a genuine principle. For, on the one hand, the mind and its act are immaterial; but the things to be known in the physical world are material. It becomes necessary, therefore, for the mind to “go out of itself,” as it were, in the effort to know things. This is precisely what gives rise to what is called the “order of concretion.”

But how much “going out of itself” will be necessary, or precisely how that is to be done, is not something that can be known without experience — the experience, as it turns out, not merely of an individual but of an entire tradition of thought. Here I am speaking of history, and history has, indeed, everything to do with what I am talking about. Aristotle’s disciples are not always as perspicacious as their master was. Some of them suppose that they should follow the master blindly in the supposition that history has no significant bearing on the “disciplines.” That supposition amounts, at least implicitly, to a still deeper assumption: the assumption, namely, that the materiality of human nature, and of the cosmos, is not so significant as to warrant a suspicion that historical time is implicated in the material essence of things. Aristotle did not think of time as essentially historical in the sense I am speaking of here. The discovery that it was essentially historical was not yet attainable.

I would argue that Sean Collins should consider how similar considerations would apply to his remark that “the mind and its act are immaterial.” Perhaps we know in a general way that sensation is more immaterial than growth, but we do not think that sensation therefore does not involve an organ. How confident should one be that the mind does not use an organ based on such general considerations? Just as there is a difference between the “logical” consideration of time and motion and their “physical” consideration, so there might be a similar difference between two kinds of consideration of the mind.

Elsewhere, Collins criticizes a certain kind of criticism of science:

We do encounter the atomists, who argue to a certain complexity in material things. Most of our sophomore year’s natural science is taken up with them. But what do we do with them? The only atomists we read are the early ones, who are only just beginning to discover evidence for atoms. The evidence they possess for atoms is still weak enough so that we often think we can take refuge in general statements about the hypothetical nature of modern science. In other words, without much consideration, we are tempted to write modern science off, so that we can get back to this thing we call philosophy.

Some may find that description a little stark, but at any rate, right here at the start, I want to note parenthetically that such a dismissal would be far less likely if we did not often confuse experimental science with the most common philosophical account of contemporary science. That most common philosophical account is based largely on the very early and incomplete developments of science, along with an offshoot of Humean philosophy which came into vogue mainly through Ernst Mach. But if we look at contemporary science as it really is today, and take care to set aside accidental associations it has with various dubious philosophies, we find a completely wonderful and astonishing growth of understanding of the physical structure not only of material substances, but of the entire cosmos. And so while some of us discuss at the lunch table whether the hypothesis of atoms is viable, physicists and engineers around the world make nanotubes and other lovely little structures, even machines, out of actual atoms of various elements such as carbon.

And likewise during such discussions, neuroscientists discuss which parts of the brain are responsible for abstract thought.

When we discussed the mixing of wine and water, we noted how many difficulties could arise when you consider a process in detail, which you might not notice simply with a general consideration. The same thing will certainly happen in the consideration of how the mind works. For example, how am I choosing these words as I type? I do not have the time to consider a vast list of alternatives for each word, even though there would frequently be several possibilities, and sometimes I do think of more than one. Other times I go back and change a word or two, or more. But most of the words are coming to me as though by magic, without any conscious thought. Where is this coming from?

The selection of these words is almost certainly being done by a part of my brain. A sign of this is that those with transcortical motor aphasia have great difficulty selecting words, but do not have a problem with understanding.

This is only one small element of a vast interconnected process which is involved in understanding, thinking, and speaking. And precisely because there is a very complex process here which is not completely understood, the statement, “well, these elements are organic, but there is also some non-organic element involved,” cannot be proved false in a scientific manner, at least at this time. But neither can it be proved true, and if it did turn out to be true, there would have to be concrete relationships between that element and all the other elements. What would be the contribution of the immaterial element? What would happen if it were lacking? Or, if that question does not make sense because the element cannot be lacking, why can it not be lacking?

 

Age of Em

This is Robin Hanson’s first book. Hanson gradually introduces his topic:

You, dear reader, are special. Most humans were born before 1700. And of those born after, you are probably richer and better educated than most. Thus you and most everyone you know are special, elite members of the industrial era.

Like most of your kind, you probably feel superior to your ancestors. Oh, you don’t blame them for learning what they were taught. But you’d shudder to hear of many of your distant farmer ancestors’ habits and attitudes on sanitation, sex, marriage, gender, religion, slavery, war, bosses, inequality, nature, conformity, and family obligations. And you’d also shudder to hear of many habits and attitudes of your even more ancient forager ancestors. Yes, you admit that lacking your wealth your ancestors couldn’t copy some of your habits. Even so, you tend to think that humanity has learned that your ways are better. That is, you believe in social and moral progress.

The problem is, the future will probably hold new kinds of people. Your descendants’ habits and attitudes are likely to differ from yours by as much as yours differ from your ancestors. If you understood just how different your ancestors were, you’d realize that you should expect your descendants to seem quite strange. Historical fiction misleads you, showing your ancestors as more modern than they were. Science fiction similarly misleads you about your descendants.

As an example of the kind of past difference that Robin is discussing, even from the fairly recent past, consider this account by William Ewald of a trial from the sixteenth century:

In 1522 some rats were placed on trial before the ecclesiastical court in Autun. They were charged with a felony: specifically, the crime of having eaten and wantonly destroyed some barley crops in the jurisdiction. A formal complaint against “some rats of the diocese” was presented to the bishop’s vicar, who thereupon cited the culprits to appear on a day certain, and who appointed a local jurist, Barthelemy Chassenée (whose name is sometimes spelled Chassanée, or Chasseneux, or Chasseneuz), to defend them. Chassenée, then forty-two, was known for his learning, but not yet famous; the trial of the rats of Autun was to establish his reputation, and launch a distinguished career in the law.

When his clients failed to appear in court, Chassenée resorted to procedural arguments. His first tactic was to invoke the notion of fair process, and specifically to challenge the original writ for having failed to give the rats due notice. The defendants, he pointed out, were dispersed over a large tract of countryside, and lived in many villages; a single summons was inadequate to notify them all. Moreover, the summons was addressed only to some of the rats of the diocese; but technically it should have been addressed to them all.

Chassenée was successful in his argument, and the court ordered a second summons to be read from the pulpit of every local parish church; this second summons now correctly addressed all the local rats, without exception.

But on the appointed day the rats again failed to appear. Chassenée now made a second argument. His clients, he reminded the court, were widely dispersed; they needed to make preparations for a great migration, and those preparations would take time. The court once again conceded the reasonableness of the argument, and granted a further delay in the proceedings. When the rats a third time failed to appear, Chassenée was ready with a third argument. The first two arguments had relied on the idea of procedural fairness; the third treated the rats as a class of persons who were entitled to equal treatment under the law. He addressed the court at length, and successfully demonstrated that, if a person is cited to appear at a place to which he cannot come in safety, he may lawfully refuse to obey the writ. And a journey to court would entail serious perils for his clients. They were notoriously unpopular in the region; and furthermore they were rightly afraid of their natural enemies, the cats. Moreover (he pointed out to the court) the cats could hardly be regarded as neutral in this dispute; for they belonged to the plaintiffs. He accordingly demanded that the plaintiffs be enjoined by the court, under the threat of severe penalties, to restrain their cats, and prevent them from frightening his clients. The court again found this argument compelling; but now the plaintiffs seem to have come to the end of their patience. They demurred to the motion; the court, unable to settle on the correct period within which the rats must appear, adjourned on the question sine die, and judgment for the rats was granted by default.

Most of us would assume at once that this is all nothing but an elaborate joke; but Ewald strongly argues that it was all quite serious. This would actually be worthy of its own post, but I will leave it aside for now. In any case it illustrates the existence of extremely different attitudes even a few centuries ago.

In any event, Robin continues:

New habits and attitudes result less than you think from moral progress, and more from people adapting to new situations. So many of your descendants’ strange habits and attitudes are likely to violate your concepts of moral progress; what they do may often seem wrong. Also, you likely won’t be able to easily categorize many future ways as either good or evil; they will instead just seem weird. After all, your world hardly fits the morality tales your distant ancestors told; to them you’d just seem weird. Complex realities frustrate simple summaries, and don’t fit simple morality tales.

Many people of a more conservative temperament, such as myself, might wish to swap out “moral progress” here with “moral regress,” but the point stands in any case. This is related to our discussions of the effects of technology and truth on culture, and of the idea of irreversible changes.

Robin finally gets to the point of his book:

This book presents a concrete and plausible yet troubling view of a future full of strange behaviors and attitudes. You may have seen concrete troubling future scenarios before in science fiction. But few of those scenarios are in fact plausible; their details usually make little sense to those with expert understanding. They were designed for entertainment, not realism.

Perhaps you were told that fictional scenarios are the best we can do. If so, I aim to show that you were told wrong. My method is simple. I will start with a particular very disruptive technology often foreseen in futurism and science fiction: brain emulations, in which brains are recorded, copied, and used to make artificial “robot” minds. I will then use standard theories from many physical, human, and social sciences to describe in detail what a world with that future technology would look like.

I may be wrong about some consequences of brain emulations, and I may misapply some science. Even so, the view I offer will still show just how troublingly strange the future can be.

I greatly enjoyed Robin’s book, but unfortunately I have to admit that relatively few people will share that enjoyment. It is easy enough to see the reason for this from Robin’s introduction. Who would expect to be interested? Possibly those who enjoy the “futurism and science fiction” concerning brain emulations; but if Robin does what he set out to do, those persons will find themselves strangely uninterested. As he says, science fiction is “designed for entertainment, not realism,” while he is attempting to answer the question, “What would this actually be like?” This intention is very remote from the intention of science fiction, and consequently the book will likely appeal to different people.

Whether or not Robin gets the answer to this question right, he definitely succeeds in making his approach and appeal differ from those of science fiction.

One might illustrate this with almost any random passage from the book. Here are portions of his discussion of the climate of em cities:

As we will discuss in Chapter 18, Cities section, em cities are likely to be big, dense, highly cost-effective concentrations of computer and communication hardware. How might such cities interact with their surroundings?

Today, computer and communication hardware is known for being especially temperamental about its environment. Rooms and buildings designed to house such hardware tend to be climate-controlled to ensure stable and low values of temperature, humidity, vibration, dust, and electromagnetic field intensity. Such equipment housing protects it especially well from fire, flood, and security breaches.

The simple assumption is that, compared with our cities today, em cities will also be more climate-controlled to ensure stable and low values of temperature, humidity, vibrations, dust, and electromagnetic signals. These controls may in fact become city level utilities. Large sections of cities, and perhaps entire cities, may be covered, perhaps even domed, to control humidity, dust, and vibration, with city utilities working to absorb remaining pollutants. Emissions within cities may also be strictly controlled.

However, an em city may contain temperatures, pressures, vibrations, and chemical concentrations that are toxic to ordinary humans. If so, ordinary humans are excluded from most places in em cities for safety reasons. In addition, we will see in Chapter 18, Transport section, that many em city transport facilities are unlikely to be well matched to the needs of ordinary humans.

Cities today are the roughest known kind of terrain, in the sense that cities slow down the wind the most compared with other terrain types. Cities also tend to be hotter than neighboring areas. For example, Las Vegas is 7° Fahrenheit hotter in the summer than are surrounding areas. This hotter city effect makes ozone pollution worse and this effect is stronger for bigger cities, in the summer, at night, with fewer clouds, and with slower wind (Arnfield 2003).

This is a mild reason to expect em cities to be hotter than other areas, especially at night and in the summer. However, as em cities are packed full of computing hardware, we shall now see that em cities will actually be much hotter.

While the book considers a wide variety of topics, e.g. the social relationships among ems, which look quite different from the above passage, the general mode of treatment is the same. As Robin put it, he uses “standard theories” to describe the em world, much as he employs standard theories about cities, about temperature and climate, and about computing hardware in the above passage.

One might object that basically Robin is positing a particular technological change (brain emulations), but then assuming that everything else is the same, and working from there. And there is some validity to this objection. But in the end there is actually no better way to try to predict the future; despite David Hume’s opinion, generally the best way to estimate the future is to say, “Things will be pretty much the same.”

At the end of the book, Robin describes various criticisms. First are those who simply said they weren’t interested: “If we include those who declined to read my draft, the most common complaint is probably ‘who cares?’” And indeed, that is what I would expect, since as Robin remarked himself, people are interested in an entertaining account of the future, not an attempt at a detailed description of what is likely.

Others, he says, “doubt that one can ever estimate the social consequences of technologies decades in advance.” This is basically the objection I mentioned above.

He lists one objection that I am partly in agreement with:

Many doubt that brain emulations will be our next huge technology change, and aren’t interested in analyses of the consequences of any big change except the one they personally consider most likely or interesting. Many of these people expect traditional artificial intelligence, that is, hand-coded software, to achieve broad human level abilities before brain emulations appear. I think that past rates of progress in coding smart software suggest that at previous rates it will take two to four centuries to achieve broad human level abilities via this route. These critics often point to exciting recent developments, such as advances in “deep learning,” that they think make prior trends irrelevant.

I don’t think Robin is necessarily mistaken in regard to his expectations about “traditional artificial intelligence,” although he may be, and I am not uninterested by default in scenarios that I do not consider the most likely. But I do think that traditional artificial intelligence is more likely than his scenario of brain emulations; more on this below.

There are two other likely objections that Robin does not include in this list, although he does touch on them elsewhere. First, people are likely to say that the creation of ems would be immoral, even if it is possible, and similarly that the kinds of habits and lives that he describes would themselves be immoral. On the one hand, this should not be a criticism at all, since Robin can respond that he is simply describing what he thinks is likely, not saying whether it should happen or not; on the other hand, it is in fact obvious that Robin does not have much disapproval, if any, of his scenario. The book ends in fact by calling attention to this objection:

The analysis in this book suggests that lives in the next great era may be as different from our lives as our lives are from farmers’ lives, or farmers’ lives are from foragers’ lives. Many readers of this book, living industrial era lives and sharing industrial era values, may be disturbed to see a forecast of em era descendants with choices and life styles that appear to reject many of the values that they hold dear. Such readers may be tempted to fight to prevent the em future, perhaps preferring a continuation of the industrial era. Such readers may be correct that rejecting the em future holds them true to their core values.

But I advise such readers to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. This book has been designed in part to assist you in such a soul-searching examination. If after reading this book, you still feel compelled to disown your em descendants, I cannot say you are wrong. My job, first and foremost, has been to help you see your descendants clearly, warts and all.

Our own discussions of the flexibility of human morality are relevant. The creatures Robin is describing are in many ways quite different from humans, and it is in fact very appropriate for their morality to differ from human morality.

A second likely objection is that Robin’s ems are simply impossible, on account of the nature of the human mind. I think that this objection is mistaken, but I will leave the details of this explanation for another time. Robin appears to agree with Sean Carroll about the nature of the mind, as can be seen for example in this post. Robin is mistaken about this, for the reasons suggested in my discussion of Carroll’s position. Part of the problem is that Robin does not seem to understand the alternative. Here is a passage from the linked post on Overcoming Bias:

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

“I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.”

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

There is a false dichotomy here, and it is the same one that C.S. Lewis falls into when he says, “Either we can know nothing or thought has reasons only, and no causes.” And in general it is like the error of the pre-Socratics, that if a thing has some principles which seem sufficient, it can have no other principles, failing to see that there are several kinds of cause, and each can be complete in its own way. And perhaps I am getting ahead of myself here, since I said this discussion would be for later, but the objection that Robin’s scenario is impossible is mistaken in exactly the same way, and for the same reason: people believe that if a “materialistic” explanation could be given of human behavior in the way that Robin describes, then people do not truly reason, make choices, and so on. But this is simply to adopt the other side of the false dichotomy, much like C.S. Lewis rejects the possibility of causes for our beliefs.

One final point. I mentioned above that I see Robin’s scenario as less plausible than traditional artificial intelligence. I agree with Tyler Cowen in this post. This present post is already long enough, so again I will leave a detailed explanation for another time, but I will remark that Robin and I have a bet on the question.

Ezekiel Bulver on Descartes

C.S. Lewis writes:

In other words, you must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he came to be so silly. In the course of the last fifteen years I have found this vice so common that I have had to invent a name for it. I call it “Bulverism.” Some day I am going to write the biography of its imaginary inventor, Ezekiel Bulver, whose destiny was determined at the age of five when he heard his mother say to his father – who had been maintaining that two sides of a triangle were together greater than the third – “Oh, you say that because you are a man.” “At that moment,” E. Bulver assures us, “there flashed across my opening mind the great truth that refutation is no necessary part of argument. Assume your opponent is wrong, and then explain his error, and the world will be at your feet. Attempt to prove that he is wrong or (worse still) try to find out whether he is wrong or right, and the national dynamism of our age will thrust you to the wall.” That is how Bulver became one of the makers of the Twentieth Century.

In the post linked above, we mainly discussed “explaining how he came to be so silly” in terms of motivations. But Ezekiel Bulver has a still more insidious way of explaining people’s mistakes. Here is his explanation of the mistakes of Descartes (fictional, of course, like the rest of Bulver’s life):

Descartes was obsessed with proving the immortality of the soul and the existence of God. This is clear enough from his own statements regarding the purpose of the Meditations. This is why he makes “I think, therefore I am” the fundamental principle of his entire system. And he derives everything from this single principle.

Someone who derives everything from such a thought, of course, is almost sure to be wrong about everything, since not much can actually follow from that thought, and in any case it is fundamentally misguided to derive conclusions about the world from our ideas about knowledge, rather than deriving conclusions about knowledge from our knowledge of the world.

While Bulver includes here a reference to a motive, namely the desire to prove the immortality of the soul and the existence of God, his main argument is that Descartes is mistaken due to the flawed order of his argument.

As I suggested above, this is even more insidious than the imputation of motives. As I pointed out in the original discussion of Bulverism, having a motive for a belief does not exclude the possibility of having an argument, nor does it exclude the possibility that the argument is a strong one, nor does it exclude the possibility that one’s belief is true. But in the case under consideration, Bulver is not giving a cause rather than a reason; he is saying that Descartes has reasons, but that they are necessarily flawed ones, because they do not respect the natural order of knowing. The basic principle is the same: assume that a man is wrong, and then explain how he got to be wrong. The process appears more reasonable insofar as reasons are imputed to the person, but the imputed reasons are more exclusive of the person’s real reasons, while motives do not exclude any reasons.

As we have seen, Bulver is mistaken about Descartes. Descartes does not actually suppose that he derives his knowledge of the world from his knowledge of thought, even if he organizes his book that way.

 

Knowing Knowing and Known

In his work On the Soul, Aristotle points out that knowledge of powers depends on the knowledge of activities, and knowledge of activities depends on knowledge of the objects of the activities:

It is necessary for the student of these forms of soul first to find a definition of each, expressive of what it is, and then to investigate its derivative properties, &c. But if we are to express what each is, viz. what the thinking power is, or the perceptive, or the nutritive, we must go farther back and first give an account of thinking or perceiving, for in the order of investigation the question of what an agent does precedes the question, what enables it to do what it does. If this is correct, we must on the same ground go yet another step farther back and have some clear view of the objects of each; thus we must start with these objects, e.g. with food, with what is perceptible, or with what is intelligible.

A little thought will establish that this is entirely necessary. In order to have a general knowledge of the power or the activity, however, it will be sufficient to have a general knowledge of the object. But given that human knowledge proceeds from the general to the specific, it is reasonable to suppose that a detailed knowledge of the power or activity would require a correspondingly detailed knowledge of the object.

We can see how this would happen by thinking about the example of eating and food. A general idea of both eating and food might be this: eating is taking in other bodies and using them for growth and for energy for other living activities, and food is what can be taken in and used in this way.

Both the idea of eating and the idea of food here are fairly general, and due to their generality they leave open various questions. For example, why is it not possible to live off air? Air is a body, and in physics it is in some sense convertible with energy, so it would not seem unreasonable if it could provide the matter and energy that we need in order to grow and to live in other ways.

The general account does not of course assert that this is a possibility, but neither does it deny the possibility. So if someone thinks that the general account tells them all that needs to be known about eating and food, they will not be unlikely to conclude that living off air should be a possibility. If someone drew this conclusion it would be an example of impatience with respect to truth. The example is not very realistic, of course, even if there are a few people who actually draw this conclusion, but this lack of realism is not because of some flaw in the idea of the knowledge of activities depending on the knowledge of objects, but just because most people already accept that air is not a kind of food, even if they do not know why it is not. So they already have a somewhat more detailed knowledge of the object, and therefore also of the activity.

Something similar will result with other powers of the soul, and with powers in general. In the case of knowledge in particular, a general knowledge of knowing will depend on a general knowledge of the known or knowable, and a detailed knowledge of knowing will depend on a detailed knowledge of the known or knowable. And just as in the example above, a general knowledge does not necessarily lead someone into error, but it can leave open questions, and one who is impatient for truth might draw detailed conclusions too soon. In this case, a likely result would be that someone confuses the mode of knowledge and the mode of the known, although this would not be the only way to fall into error.

Sean Collins discusses the history of science and its relationship with philosophy:

In my post of March 6, I noted that we must distinguish between what science has been and what it ought to be, or what it is naturally ordained to be. It is therefore a mistake to take any current or past state of science and construe that as universal without any argument. It is a mistake, for example, to suppose that the Galilean paradigm of physics as “written in mathematical terms” is a universal truth, merely on the ground that physics has been that way for some time, and indeed with some fair degree of success. Or, again, I shall argue, it is a mistake to infer that science consists essentially, and by its permanent universal nature, in reasoning from artificial “paradigms,” even if the recent history of science suggests that.

But from this one might be inclined to draw either of two diametrically opposite inferences. One would be to suppose that history and science have nothing to do with each other, otherwise than accidentally. We should therefore try to find out what science is really supposed to be, and let it be that. But the opposite conclusion seems perhaps equally justifiable: namely that science is essentially historical, so that stages in its progress are precisely stages, and therefore ought not to be confused with the universal character of science itself.

Which is the right conclusion? Should we think that science and history have any real connection? To make the question suitably concrete, we should first recognize that this is really a question about humanity. It is humanity we are wondering about when we ask whether our knowledge has any essential relation with history. It is about the being called “man” himself that we must finally ask whether there is something essentially historical.

But then we can see, perhaps, that this is no small question, and it would scarcely do it justice to propose an answer to it in a few short paragraphs. For now, I will let it suffice to have asked the question. But I would also like to take note of some significant historical facts which suggest a direction in which to seek an answer. And after that I will propose what I think is an absolutely fundamental and critical principle on the way to finding an answer.

The signs I have in mind are these. Some 2500 years ago, Aristotle wrote his Organon, which laid out the delineations of “science.” Aristotle argued that science, in the strictest sense, must be knowledge from universal causes, that these causes must be expressed in self-evident principles, and that the principles must derive from the essences of things as expressed in their definitions. Historically, that view seemed to hold a very firm sway for a very long time, until something strange happened: there was a revolt. Francis Bacon, Galileo, and Descartes were primary agents of the revolt. The revolt was in large measure a backlash against entrenched stagnation, against which irrepressible spirits finally grew indignant. From that moment on, intellectual culture became bifurcated into “science” and “philosophy,” and that bifurcation remains to this day.

Those who remain in the camp of the “philosophers” often stake their claims on the basis of the original claims of Aristotle’s Posterior Analytics. They resist the intrusions of science on the alleged ground that only philosophy proceeds in a truly universal mode, seeking definitions by “genus and difference,” aiming at truly universal causes, and proposing its theses with complete certitude. Those, on the other hand, who decide to be “scientists” stake their claims on the basis of what they take to be reality itself. They notice the truly astonishing degree to which physical reality has a structure, a structure which reaches down deeply into the materiality of things. They see, all too well, that the discovery and appreciation of that structure absolutely demands a mode of thought which is not that of conventional philosophy. And they cannot, moreover, help but notice that conventional philosophers have often tended to either be completely ignorant of that structure, or worse yet, to not care about it, or to deny that it matters, or to deny that it really exists at all.

To describe this by a succinct approximation, we might say that the philosophical mindset tries to reason from characteristics of the mind: from its yearning for what transcends the murkiness of matter. The scientific mindset, by contrast, seeks to reason from the characteristics of physical reality, even possibly at the expense of the aspirations of human reason towards what is immaterial.

While one might agree that it does not “do justice” to the question in the sense of discussing it adequately, we can see from what has been said that one cannot fully separate science from history. If we ask, “what is the nature of science,” we are asking about the nature of human knowing. In order to answer this, we require knowledge of knowing. But since knowing knowing in a detailed way depends on knowing the known in a detailed way, the question of whether history is essentially involved in knowing knowing depends on whether knowing the known in a detailed way is an essentially historical process.

Human beings and especially the life of an individual human being are very small parts of reality. One consequence is that the single lifetime of an individual is not enough to come to a detailed knowledge of the physical world without an essential dependence on the work of others, which dependence implies a historical process. The “astonishing degree to which physical reality has a structure” is something that necessarily takes many human lifetimes to come to know, and is the collective work of the human race, not the work of an individual.

Speaking of Aristotle’s attitude towards matter in science, Collins says:

Aristotle, as I have noted, saw that materiality is a true principle of natural being. Henceforth, it was no longer necessary to shun matter, as the Platonists had, as if it were repugnant to philosophical endeavors. One could reason about the natural, physical world, and one could reason about it as physical.

Yet we are — no doubt inevitably — slow to grasp the implications of materiality. Even about this very historical fact, many of us tend to think in a quasi-Platonic way. And what I am about to assert will no doubt astonish some readers: even Aristotle himself continued to think in a somewhat Platonic way, despite his recognition of the principle of materiality. But anyone who is acquainted with the history of thought shouldn’t be entirely surprised at my assertion. It is common — ordinary, in fact — for great thinkers, who find themselves at the dawn of a new and fuller vision of the order of things, to have one foot remaining in the older vision, not entirely able to see the implications of their own new intuitions. If one assumes that thought ought to evolve, as opposed to merely changing in revolutionary fits, one should find this even perhaps a little reassuring.

So what do I mean when I say that Aristotle thinks in a semi-Platonic way? Briefly, I mean that, even despite himself in a way, he continues to seek the accounts, the logoi, of things in a way that would place them more in the order of the purely intellectual than the physical. For example, he seeks to understand what time is in a way that makes virtually no appeal to such physical evidence as we have nowadays through physical experimentation. (Of course! How could he make appeal to something that didn’t exist yet?) He supposes, rather inevitably, that simply thinking about time and motion from the relatively deficient point of view of something called “common experience” will give him something like a sufficient account of what he is trying to understand. And in the end, his vision of an eternal first motion as the source of time, a motion perfectly circular and unchanging, deriving from the causality of a First Mover who could not directly be the source of any contingent effects — this is a vision which now, from the point of view of contemporary science as well as Christian theology, rightly strikes us as not yet a mature vision in its understanding of the role of matter in the order of the cosmos.

This, to be sure, is not a criticism of Aristotle, as if to suggest that he should have thought something else; rather, it is merely an observation of how human thought inevitably takes time to develop. Nor do I mean to suggest that what Aristotle saw was of negligible account. It belongs precisely to what I am calling the order of concretion to begin with the relatively abstract in our understanding of material things, and this is because matter is ordered to form more than vice versa. This can be illustrated in the design of artifacts, for in them also there is always a material and a formal element. Thus, for example, barring special circumstances, one does not ordinarily design a building by first looking at what materials to use; rather one considers what form and function they are to serve, and then one chooses materials accordingly. Though there are circumstantial exceptions to this principle, it remains a principle; and it is clear enough that a systematic disregard of it would make our thought chaotic.

Thus one can see that there is a philosophical justification for doing what Aristotle did. We might describe this justification in another way as well: it derives from the fact that the human mind must bear some proportion to the reality it is to know. For having understood something of the difference between the order of intellect and the order of physical being, we still suppose, rightly, that there must be a proportion between them. Yet this rather abstract statement leaves much in doubt. How is the human mind to fulfill its destiny to know physical reality? I shall trust my readers to be able to understand that the answer to that question could not look the same in the 4th century BC as it looks now….

It is possible that Collins is too generous to Aristotle here, perhaps for the sake of his readers and of his own intellectual tradition; to some extent, it seems likely that some of Aristotle’s conclusions are “impatient” in the way we discussed earlier. Nonetheless, his basic assertion is correct: knowing the nature of knowledge in detail requires more knowledge of the knowable thing than Aristotle could have had at the time. As Collins says, this is “merely an observation of how human thought inevitably takes time to develop.” And even if there is some intellectual impatience there, it is perhaps no more than is generally found in those who seek to understand reality.