Consistency and Reality

Consistency and inconsistency, in their logical sense, are relationships between statements or between the parts of a statement. They are not properties of reality as such.

“Wait,” you will say. “If consistency is not a property of reality, then you are implying that reality is not consistent. So reality is inconsistent?”

Not at all. Consistency and inconsistency are contraries, not contradictories, and they are properties of statements. So reality as such is neither consistent nor inconsistent, in the same way that sounds are neither white nor black.

We can however speak of consistency with respect to reality in an extended sense, just as we can speak of truth with respect to reality in an extended sense, even though truth refers first to things that are said or thought. In this way we can say that a thing is true insofar as it is capable of being known, and similarly we might say that reality is consistent, insofar as it is capable of being known by consistent claims, and incapable of being known by inconsistent claims. And reality indeed seems consistent in this way: I might know the weather if I say “it is raining,” or if I say, “it is not raining,” depending on conditions, but to say “it is both raining and not raining in the same way” is not a way of knowing the weather.

Consider the last point more precisely. Why can’t we use such statements to understand the world? The statement about the weather is rather different from statements like, “The normal color of the sky is not blue but rather green.” We know what it would be like for this to be the case. For example, we know what we would expect if it were the case. In fact it cannot be used to understand the world, because these expectations fail. But if they did not, we could use it to understand the world. Now consider instead the statement, “The sky is both blue and not blue in exactly the same way.” There is now no way to describe the expectations we would have if this were the case. It is not that we understand the situation and know that it does not apply, as with the claim about the color of the sky: rather, the situation described cannot be understood. It is literally unintelligible.

This also explains why we should not think of consistency as a property of reality in a primary sense. If it were, it would be like the color blue as a property of the sky. The sky is in fact blue, but we know what it would be like for it to be otherwise. We cannot equally say, “reality is in fact consistent, but we know what it would be like for it to be inconsistent.” Instead, the supposedly inconsistent situation is a situation that cannot be understood in the first place. Reality is thus consistent not in the primary sense but in a secondary sense, namely that it is rightly understood by consistent things.

But this also implies that we cannot push the secondary consistency of reality too far, in several ways and for several reasons.

First, while inconsistency as such does not contribute to our understanding of the world, a concrete inconsistent set of claims can help us understand the world, and in many situations better than any particular consistent set of claims that we might currently come up with. This was discussed in a previous post on consistency.

Second, we might respond to the above by pointing out that it is always possible in principle to formulate a consistent explanation of things which would be better than the inconsistent one. We might not currently be able to arrive at the consistent explanation, but it must exist.

But even this needs to be understood in a somewhat limited way. Any consistent explanation of things will necessarily be incomplete, which means that more complete explanations, whether consistent or inconsistent, will be possible. Consider for example these recent remarks of James Chastek on Gödel’s theorem:

1.) Given any formal system, let proposition (P) be “this formula is unprovable in the system”

2.) If P is provable, a contradiction occurs.

3.) Therefore, P is known to be unprovable.

4.) If P is known to be unprovable it is known to be true.

5.) Therefore, P is (a) unprovable in a system and (b) known to be true.

In the article linked by Chastek, John Lucas argues that this is a proof that the human mind is not a “mechanism,” since we can know to be true something that the mechanism will not be able to prove.

But consider what happens if we simply take the “formal system” to be you, and “this formula is unprovable in the system” to mean “you cannot prove this statement to be true.” Is it true, or not? And can you prove it?

If you say that it is true but that you cannot prove it, the question is how you know that it is true. If you know by the above reasoning, then you have a syllogistic proof that it is true, and so it is false that you cannot prove it, and so it is false.

If you say that it is false, then you cannot prove it, because false things cannot be proven, and so it is true.
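The dilemma can be sketched as a toy model: treat “you” as a black-box prover and ask, in each case, what follows from whether or not it proves the diagonal sentence. The function name and the boolean encoding are illustrative assumptions of this sketch, not part of any formal system.

```python
# A toy model of the dilemma above, not a formal proof. We treat "you"
# as a black-box prover and consider the diagonal sentence
# "you cannot prove this statement to be true."

def diagnosis(prover_proves_it: bool):
    """Given whether the prover proves the diagonal sentence, return
    (the sentence's truth value, whether the prover stays sound)."""
    # The sentence says it is unprovable, so it is true iff unproven.
    truth = not prover_proves_it
    # Proving a false sentence would make the prover unsound.
    sound = (not prover_proves_it) or truth
    return truth, sound

# Case 1: the prover proves it. Then the sentence is false, and the
# prover has proven a falsehood.
print(diagnosis(True))   # (False, False)

# Case 2: the prover does not prove it. Then the sentence is true,
# but remains unproven by the prover.
print(diagnosis(False))  # (True, True)
```

Either way, there is no case in which the prover both proves the sentence and remains sound, which is the predicament described above.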

It is evident here that you can give no consistent response that you can know to be true. “It is true but I cannot know it to be true” may be consistent, but obviously if it is true, you cannot know it to be true, and if it is false, you cannot know it to be true. What is really proven by Gödel’s theorem is not that the mind is not a “mechanism,” whatever that might be, but that any consistent account of arithmetic must be incomplete. And if any consistent account of arithmetic alone is incomplete, much more must any consistent explanation of reality as a whole be incomplete. And among more complete explanations, there will be some inconsistent ones as well as consistent ones. Thus you might well improve any particular inconsistent position by adopting a consistent one, but you might again improve any particular consistent position by adopting an inconsistent one which is more complete.

The above has some relation to our discussion of the Liar Paradox. Someone might be tempted to give the same response to “tonk” and to “true”:

The problem with “tonk” is that it is defined in such a way as to have inconsistent implications. So the right answer is to abolish it. Just do not use that word. In the same way, “true” is defined in such a way that it has inconsistent implications. So the right answer is to abolish it. Just do not use that word.

We can in fact avoid drawing inconsistent conclusions using this method. The problem with the method is obvious, however. The word “tonk” does not actually exist, so there is no problem with abolishing it. It never contributed to our understanding of the world in the first place. But the word “true” does exist, and it contributes to our understanding of the world. To abolish it, then, would remove some inconsistency, but it would also remove part of our understanding of the world. We would be adopting a less complete but more consistent understanding of things.
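For concreteness: Prior’s “tonk” is stipulated to have the introduction rule of disjunction (from A, infer “A tonk B”) and the elimination rule of conjunction (from “A tonk B”, infer B), which together let one “derive” any conclusion from any premise. A minimal sketch, assuming a toy proof system in which formulas are plain strings:

```python
# Prior's "tonk" connective in a toy proof system where formulas are
# plain strings. The two rules below are the ones tonk is defined by.

def tonk_intro(a: str, b: str) -> str:
    """From A, infer 'A tonk B' (tonk-introduction)."""
    return f"({a} tonk {b})"

def tonk_elim(formula: str) -> str:
    """From 'A tonk B', infer B (tonk-elimination)."""
    left, right = formula[1:-1].split(" tonk ", 1)
    return right

# Start from a harmless premise and "prove" an arbitrary conclusion.
premise = "it is raining"
conclusion = tonk_elim(tonk_intro(premise, "the sky is green"))
print(conclusion)  # the sky is green
```

This is why abolishing the word is the obvious remedy: a system containing both rules proves everything, and so distinguishes nothing.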

Hilary Lawson discusses this response in Closure: A Story of Everything:

Russell and Tarski’s solution to self-referential paradox succeeds only by arbitrarily outlawing the paradox and thus provides no solution at all.

Some have claimed to have a formal, logical, solution to the paradoxes of self-reference. Since if these were successful the problems associated with the contemporary predicament and the Great Project could be solved forthwith, it is important to briefly examine them before proceeding further. The argument I shall put forward aims to demonstrate that these theories offer no satisfactory solution to the problem, and that they only appear to do so by obscuring the fact that they have defined their terms in such a way that the paradox is not so much avoided as outlawed.

The problems of self-reference that we have identified are analogous to the ancient liar paradox. The ancient liar paradox stated that ‘All Cretans are liars’ but was itself uttered by a Cretan thus making its meaning undecidable. A modern equivalent of this ancient paradox would be ‘This sentence is not true’, and the more general claim that we have already encountered: ‘there is no truth’. In each case the application of the claim to itself results in paradox.

The supposed solutions, Lawson says, are like the one suggested above: “Just do not use that word.” Thus he remarks on Tarski’s proposal:

Adopting Tarski’s hierarchy of languages one can formulate sentences that have the appearance of being self-referential. For example, a Tarskian version of ‘This sentence is not true’ would be:

(I) The sentence (I) is not true-in-L.

So Tarski’s argument runs, this sentence is both a true sentence of the language meta-L, and false in the language L, because it refers to itself and is therefore, according to the rules of Tarski’s logic and the hierarchy of languages, not properly formed. The hierarchy of languages apparently therefore enables self-referential sentences but avoids paradox.

More careful inspection however shows the manoeuvre to be engaged in a sleight of hand for the sentence as constructed only appears to be self-referential. It is a true sentence of the meta-language that makes an assertion of a sentence in L, but these are two different sentences – although they have superficially the same form. What makes them different is that the meaning of the predicate ‘is not true’ is different in each case. In the meta-language it applies the meta-language predicate ‘true’ to the object language, while in the object language it is not a predicate at all. As a consequence the sentence is not self-referential. Another way of expressing this point would be to consider the sentence in the meta-language. The sentence purports to be a true sentence in the meta-language, and applies the predicate ‘is not true’ to a sentence in L, not to a sentence in meta-L. Yet what is this sentence in L? It cannot be the same sentence for this is expressed in meta-L. The evasion becomes more apparent if we revise the example so that the sentence is more explicitly self-referential:

(I) The sentence (I) is not true-in-this-language.

Tarski’s proposal that no language is allowed to contain its own truth-predicate is precisely designed to make this example impossible. The hierarchy of languages succeeds therefore only by providing an account of truth which makes genuine self-reference impossible. It can hardly be regarded therefore as a solution to the paradox of self-reference, since if all that was required to solve the paradox was to ban it, this could have been done at the outset.
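Tarski’s restriction can be pictured with a toy model in which every sentence carries an explicit level, and the truth predicate of level n is defined only for sentences strictly below n. The `Sentence` class and the stipulated truth values are assumptions of this sketch, not Tarski’s actual construction:

```python
# A toy picture of Tarski's hierarchy of languages. Each sentence
# belongs to a language L_level, and the truth predicate of the
# metalanguage at level n applies only to sentences below level n.

class Sentence:
    def __init__(self, level: int, value: bool):
        self.level = level  # the language L_level the sentence lives in
        self.value = value  # its truth value, simply stipulated here

def true_in(n: int, sentence: Sentence) -> bool:
    """The truth predicate of level n: defined only below level n."""
    if sentence.level >= n:
        # "This sentence is not true-in-this-language" is not merely
        # false; it is ruled ill-formed before it can be evaluated.
        raise ValueError("ill-formed: no language contains its own truth predicate")
    return sentence.value

s = Sentence(level=0, value=False)
print(true_in(1, s))  # False: meta-L may speak about L

try:
    true_in(0, s)     # L may not speak about itself
except ValueError as e:
    print(e)
```

The point Lawson presses is visible in the model: the paradoxical sentence is not evaluated and found false; it is excluded from the language by fiat before the question of its truth can arise.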

Someone might be tempted to conclude that we should say that reality is inconsistent after all. Since any consistent account of reality is incomplete, it must be that the complete account of reality is inconsistent: and so someone who understood reality completely, would do so by means of an inconsistent theory. And just as we said that reality is consistent, in a secondary sense, insofar as it is understood by consistent things, so in that situation, one would say that reality is inconsistent, in a secondary sense, because it is understood by inconsistent things.

The problem with this is that it falsely assumes that a complete and intelligible account of reality is possible. This is not possible largely for the same reasons that there cannot be a list of all true statements. And although we might understand things through an account which is in fact inconsistent, the inconsistency itself contributes nothing to our understanding, because the inconsistency is in itself unintelligible, just as we said about the statement that the sky is both blue and not blue in the same way.

We might ask whether we can at least give a consistent account superior to an account which includes the inconsistencies resulting from the use of “truth.” This might very well be possible, but it appears to me that no one has actually done so. This is in fact one of Lawson’s intentions in his book, but I would assert that his project fails overall, despite potentially making some real contributions. The reader is nonetheless welcome to investigate for themselves.


Hard Problem of Consciousness

We have touched on this in various places, and in particular in this discussion of zombies, but we are now in a position to give a more precise answer.

Bill Vallicella has a discussion of Thomas Nagel on this issue:

Nagel replies in the pages of NYRB (8 June 2017; HT: Dave Lull) to one Roy Black, a professor of bioengineering:

The mind-body problem that exercises both Daniel Dennett and me is a problem about what experience is, not how it is caused. The difficulty is that conscious experience has an essentially subjective character—what it is like for its subject, from the inside—that purely physical processes do not share. Physical concepts describe the world as it is in itself, and not for any conscious subject. That includes dark energy, the strong force, and the development of an organism from the egg, to cite Black’s examples. But if subjective experience is not an illusion, the real world includes more than can be described in this way.

I agree with Black that “we need to determine what ‘thing,’ what activity of neurons beyond activating other neurons, was amplified to the point that consciousness arose.” But I believe this will require that we attribute to neurons, and perhaps to still more basic physical things and processes, some properties that in the right combination are capable of constituting subjects of experience like ourselves, to whom sunsets and chocolate and violins look and taste and sound as they do. These, if they are ever discovered, will not be physical properties, because physical properties, however sophisticated and complex, characterize only the order of the world extended in space and time, not how things appear from any particular point of view.

The problem might be condensed into an aporetic triad:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely physical processes do not share.

3) The only acceptable explanation of conscious experience is in terms of physical properties alone.

Take a little time to savor this problem. Note first that the three propositions are collectively inconsistent: they cannot all be true.  Any two limbs entail the negation of the remaining one. Note second that each limb exerts a strong pull on our acceptance.  But we cannot accept them all because they are logically incompatible.

Which proposition should we reject? Dennett, I take it, would reject (1). But that’s a lunatic solution as Professor Black seems to appreciate, though he puts the point more politely. When I call Dennett a sophist, as I have on several occasions, I am not abusing him; I am underscoring what is obvious, namely, that the smell of cooked onions, for example, is a genuine datum of experience, and that such phenomenological data trump scientistic theories.

Sophistry aside, we either reject (2) or we reject (3).  Nagel and I accept (1) and (2) and reject (3). Black, and others of the scientistic stripe, accept (1) and (3) and reject (2).

In order to see the answer to this, we can construct a Parmenidean parallel to Vallicella’s aporetic triad:

1) Distinction is not an illusion.

2) Being has an essentially objective character of actually being that distinction does not share (considering that distinction consists in the fact of not being something.)

3) The only acceptable explanation of distinction is in terms of being alone (since there is nothing but being to explain things with.)

Parmenides rejects (1) here. What approach would Vallicella take? If he wishes to take a similarly analogous approach, he should accept (1) and (2), and deny (3). And this would be a pretty commonsense approach, and perhaps the one that most people implicitly adopt if they ever think about the problem.

At the same time, it is easy to see that (3) is approximately just as obviously true as (1); and it is for this reason that Parmenides sees rejecting (1) and accepting (2) and (3) as reasonable.

The correct answer, of course, is that the three are not inconsistent despite appearances. In fact, we have effectively answered this in recent posts. Distinction is not an illusion, but a way that we understand things, as such. And being a way of understanding, it is not (as such) a way of being mistaken, and thus it is not an illusion, and thus the first point is correct. Again, being a way of understanding, it is not a way of being as such, and thus the second point is correct. And yet distinction can be explained by being, since there is something (namely relationship) which explains why it is reasonable to think in terms of distinctions.

Vallicella’s triad mentions “purely physical processes” and “physical properties,” but the idea of “physical” here is a distraction, and is not really relevant to the problem. Consider the following from another post by Vallicella:

If I understand Galen Strawson’s view, it is the first.  Conscious experience is fully real but wholly material in nature despite the fact that on current physics we cannot account for its reality: we cannot understand how it is possible for qualia and thoughts to be wholly material.   Here is a characteristic passage from Strawson:

Serious materialists have to be outright realists about the experiential. So they are obliged to hold that experiential phenomena just are physical phenomena, although current physics cannot account for them.  As an acting materialist, I accept this, and assume that experiential phenomena are “based in” or “realized in” the brain (to stick to the human case).  But this assumption does not solve any problems for materialists.  Instead it obliges them to admit ignorance of the nature of the physical, to admit that they don’t have a fully adequate idea of what the physical is, and hence of what the brain is.  (“The Experiential and the Non-Experiential” in Warner and Szubka, p. 77)

Strawson and I agree on two important points.  One is that what he calls experiential phenomena are as real as anything and cannot be eliminated or reduced to anything non-experiential. Dennett denied! The other is that there is no accounting for experiential items in terms of current physics.

I disagree on whether his mysterian solution is a genuine solution to the problem. What he is saying is that, given the obvious reality of conscious states, and given the truth of naturalism, experiential phenomena must be material in nature, and that this is so whether or not we are able to understand how it could be so.  At present we cannot understand how it could be so. It is at present a mystery. But the mystery will dissipate when we have a better understanding of matter.

This strikes me as bluster.

An experiential item such as a twinge of pain or a rush of elation is essentially subjective; it is something whose appearing just is its reality.  For qualia, esse = percipi.  If I am told that someday items like this will be exhaustively understood from a third-person point of view as objects of physics, I have no idea what this means.  The notion strikes me as absurd.  We are being told in effect that what is essentially subjective will one day be exhaustively understood as both essentially subjective and wholly objective.  And that makes no sense. If you tell me that understanding in physics need not be objectifying understanding, I don’t know what that means either.

Here Vallicella uses the word “material,” which is presumably equivalent to “physical” in the above discussion. But it is easy to see here that being material is not the problem: being objective is the problem. Material things are objective, and Vallicella sees an irreducible opposition between being objective and being subjective. In a similar way, we can reformulate Vallicella’s original triad so that it does not refer to being physical:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely objective processes do not share.

3) The only acceptable explanation of conscious experience is in terms of objective properties alone.

It is easy to see that this formulation is the real source of the problem. And while Vallicella would probably deny (3) even in this formulation, it is easy to see why people would want to accept (3). “Real things are objective,” they will say. If you want to explain anything, you should explain it using real things, and therefore objective things.

The parallel with the Parmenidean problem is evident. We would want to explain distinction in terms of being, since there isn’t anything else, and yet this seems impossible, so one (e.g. Parmenides) is tempted to deny the existence of distinction. In the same way, we would want to explain subjective experience in terms of objective facts, since there isn’t anything else, and yet this seems impossible, so one (e.g. Dennett) is tempted to deny the existence of subjective experience.

Just as the problem is parallel, the correct solution will be almost entirely parallel to the solution to the problem of Parmenides.

1) Conscious experience is not an illusion. It is a way of perceiving the world, not a way of not perceiving the world, and definitely not a way of not perceiving at all.

2) Consciousness is subjective, that is, it is a way that an individual perceives the world, not a way that things are as such, and thus not an “objective fact” in the sense that “the way things are” is objective.

3) The “way things are”, namely the objective facts, are sufficient to explain why individuals perceive the world. Consider again this post, responding to a post by Robin Hanson. We could reformulate his criticism to express instead Parmenides’s criticism of common sense (changed parts in italics):

People often state things like this:

I am sure that there is not just being, because I’m aware that some things are not other things. I know that being just isn’t non-being. So even though there is being, there must be something more than that to reality. So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care about distinctions, not just being; we want to know what out there is distinct from which other things.

But consider a key question: Does this other distinction stuff interact with the parts of our world that actually exist strongly and reliably enough to usually be the actual cause of humans making statements of distinction like this?

If yes, this is a remarkably strong interaction, making it quite surprising that philosophers, possibly excepting Duns Scotus, have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite understandable with existing philosophy. Any interaction not so understandable would have to be vastly more difficult to understand than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will understand such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of distinction, then we have a remarkable coincidence to explain. Somehow this extra distinction stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that distinction stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if distinction stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that distinction stuff actually exists? Such a coincidence seems too remarkable to be believed.

“Distinction stuff”, of course, does not exist, and neither does “feeling stuff.” But some things are distinct from others. Saying this is a way of understanding the world, and it is a reasonable way to understand the world because things exist relative to one another. And just as one thing is distinct from another, people have experiences. Those experiences are ways of knowing the world (broadly understood.) And just as reality is sufficient to explain distinction, so reality is sufficient to explain the fact that people have experiences.

How exactly does this answer the objection about interaction? In the case of distinction, the fact that “one thing is not another” is never the direct cause of anything, not even of the fact that “someone believes that one thing is not another.” So there would seem to be a “remarkable coincidence” here, or we would have to say that since the fact seems unrelated to the opinion, there is no reason to believe people are right when they make distinctions.

The answer in the case of distinction is that one thing is related to another, and this fact is the cause of someone believing that one thing is not another. There is no coincidence, and no reason to believe that people are mistaken when they make distinctions, despite the fact that distinction as such causes nothing.

In a similar way, “a human being is what it is,” and “a human being does what it does” (taken in an objective sense), cause human beings to say and believe that they have subjective experience (taking saying and believing to refer to objective facts). But this is precisely where the zombie question arises: they say and believe that they have subjective experience, when we interpret “say” and “believe” in the objective sense. But do they actually say and believe anything, considering saying and believing as including the subjective factor? Namely, when a non-zombie says something, it subjectively understands the meaning of what it is saying, and when it consciously believes something, it has a subjective experience of doing so, but these things would not apply to a zombie.

But notice that we can raise a similar question about zombie distinctions. When someone says and believes that one thing is not another, objective reality is similarly the cause of their making the distinction. But is the one thing actually not the other? There is no question at all here except whether the person’s statement is true or false. And indeed, someone can say, e.g., “The person who came yesterday is not the person who came today,” and this can sometimes be false. In a similar way, asking whether an apparent person is a zombie or not is just asking whether their claim is true or false when they say they have a subjective experience. The difference is that if the (objective) claim is false, then there is no claim at all in the subjective sense of “subjectively claiming something.” It is a contradiction to subjectively make the false claim that you are subjectively claiming something, and thus this cannot happen.

Someone may insist: you yourself, when you subjectively claim something, cannot be mistaken for the above reason. But you have no way to know whether someone else who apparently is making that claim, is actually making the claim subjectively or not. This is the reason there is a hard problem.

How do we investigate the case of distinction? If we want to determine whether the person who came yesterday is not the person who came today, we do that by looking at reality, despite the fact that distinction as such is not a part of reality as such. If the person who came yesterday is now, today, a mile away from the person who came today, this gives us plenty of reason to say that the one person is not the other. There is nothing strange, however, in the fact that there is no infallible method to prove conclusively, once and for all, that one thing is definitely not another thing. There is not therefore some special “hard problem of distinction.” This is just a result of the fact that our knowledge in general is not infallible.

In a similar way, if we want to investigate whether something has subjective experience or not, we can do that only by looking at reality: what is this thing, and what does it do? Then suppose it makes an apparent claim that it has subjective experience. Obviously, for the above reasons, this cannot be a subjective claim but false: so the question is whether it makes a subjective claim and is right, or rather makes no subjective claim at all. How would you answer this as an external observer?

In the case of distinction, the fact that someone claims that one thing is distinct from another is caused by reality, whether the claim is true or false. So whether it is true or false depends on the way that it is caused by reality. In a similar way, the thing which apparently and objectively claims to possess subjective experience, is caused to do so by objective facts. Again, as in the case of distinction, whether it is true or false will depend on the way that it is caused to do so by objective facts.

We can give some obvious examples:

“This thing claims to possess subjective experience because it is a human being and does what humans normally do.” In this case, the objective and subjective claim is true, and is caused in the right way by objective facts.

“This thing claims to possess subjective experience because it is a very simple computer given a very simple program to output ‘I have subjective experience’ on its screen.” In this case the external claim is false, and it is caused in the wrong way by objective facts, and there is no subjective claim at all.

But how do you know for sure, someone will object. Perhaps the computer really is conscious, and perhaps the apparent human is a zombie. But we could similarly ask how we can know for sure that the person who came yesterday isn’t the same person who came today, even though they appear distant from each other, because perhaps the person is bilocating?

It would be mostly wrong to describe this situation by saying “there really is no hard problem of consciousness,” as Robin Hanson appears to do when he says, “People who think they can conceive of such zombies see a ‘hard question’ regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel.” The implication seems to be that there is no hard question at all. But there is, and the fact that people engage in this discussion proves the existence of the question. Rather, we should say that the question is answerable, and that once it has been answered the remaining questions are “hard” only in the sense that it is hard to understand the world in general. The question is hard in exactly the way the question of Parmenides is hard: “How is it possible for one thing not to be another, when there is only being?” The question of consciousness is similar: “How is it possible for something to have subjective experience, when there are only objective things?” And the question can and should be answered in a similar fashion.

It would be virtually impossible to address every related issue in a simple blog post of this form, so I will simply mention some things that I have mainly set aside here:

1) The issue of formal causes, discussed more in my earlier treatment of this issue. This is relevant because “is this a zombie?” is in effect equivalent to asking whether the thing lacks a formal cause. This is worthy of a great deal of consideration and would go far beyond either this post or the earlier one.

2) The issue of “physical” and “material.” As I stated in this post, this is mainly a distraction. Most of the time, the real question is how the subjective is possible given that we believe that the world is objective. The only relevance of “matter” here is that it is obvious that a material thing is an objective thing. But of course, an immaterial thing would also have to be objective in order to be a thing at all. Aristotle and many philosophers of his school make the specific argument that the human mind does not have an organ, but such arguments are highly questionable, and in my view fundamentally flawed. My earlier posts suffice to call such a conclusion into question, but do not attempt to disprove it, and the topic would be worthy of additional consideration.

3) Specific questions about “what, exactly, would actually be conscious?” Now neglecting such questions might seem to be a cop-out, since isn’t this what the whole problem was supposed to be in the first place? But in a sense we did answer it. Take an apparent claim of something to be conscious. The question would be this: “Given how it was caused by objective facts to make that claim, would it be a reasonable claim for a subjective claimer to make?” In other words, we cannot assume in advance that it is subjectively making a claim, but if it would be a reasonable claim, it will (in general) be a true one, and therefore also a subjective one, for the same reason that we (in general) make true claims when we reasonably claim that one thing is not another. We have not answered this question only in the same sense that we have not exhaustively explained which things are distinct from which other things, and how one would know. But the question, e.g., “when if ever would you consider an artificial intelligence to be conscious?” is in itself also worthy of direct discussion.

4) The issue of vagueness. This issue in particular will cause some people to object to my answer here. Thus Alexander Pruss brings this up in a discussion of whether a computer could be conscious:

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

I responded in the comments there:

The transition between being conscious and not being conscious that happens when you fall asleep seems pretty vague. I don’t see why you find it implausible that “being conscious” could be vague in much the same way “being red” or “being intelligent” might be vague. In fact the evidence from experience (falling asleep etc) seems to directly suggest that it is vague.

Pruss responds:

When I fall asleep, I may become conscious of less and less. But I can’t get myself to deny that either it is definitely true at any given time that I am at least a little conscious or it is definitely true that I am not at all conscious.

But we cannot trust Pruss’s intuitions about what can be vague or otherwise. Pruss claims in an earlier post that there is necessarily a sharp transition between someone’s not being old and someone’s being old. I discussed that post here. This is so obviously false that it gives us a reason in general not to trust Alexander Pruss on the issue of sharp transitions and vagueness. The source of this particular intuition may be the fact that you cannot subjectively make a claim, even vaguely, without some subjective experience, as well as his general impression that vagueness violates the principles of excluded middle and non-contradiction. But in a similar way, you cannot be vaguely old without being somewhat old. This does not mean that there is a sharp transition from not being old to being old, and likewise it does not necessarily mean that there is a sharp transition from not having subjective experience to having it.

While I have discussed the issue of vagueness elsewhere on this blog, this will probably continue to be a recurring feature, if only because of those who cannot accept this feature of reality and insist, in effect, on “this or nothing.”

Being and Unity II

Content warning: very obscure.

This post follows up on an earlier post on this topic, as well as on what was recently said about real distinction. In the latter post, we applied the distinction between the way a thing is and the way it is known in order to better understand distinction itself. We can obtain a better understanding of unity in a similar way.

As was said in the earlier post on unity, to say that something is “one” does not add anything real to the being of the thing, but it adds the denial of the division between distinct things. The single apple is not “an apple and an orange,” which are divided insofar as they are distinct from one another.

But being distinct from divided things is itself a certain way of being distinct, and consequently all that was said about distinction in general will apply to this way of being distinct as well. In particular, since being distinct means not being something, which is a way that things are understood rather than a way that they are (considered precisely as a way of being), the same thing applies to unity. To say that something is one does not add something to the way that it is, but it adds something to the way that it is understood. This way of being understood is founded, we argued, on existing relationships.

We should avoid two errors here, both of which would be expressions of the Kantian error:

First, the argument here does not mean that a thing is not truly one thing, just as the earlier discussion does not imply that it is false that a chair is not a desk. On the contrary, a chair is in fact not a desk, and a chair is in fact one chair. But when we say or think, “a chair is not a desk,” or “a chair is one chair,” we are saying these things in some way of saying, and thinking them in some way of thinking, and these ways of saying and thinking are not ways of being as such. This in no way implies that the statements themselves are false, just as “the apple seems to be red,” does not imply that the apple is not red. Arguing that the fact of a specific way of understanding implies that the thing is falsely understood would be the position described by Ayn Rand as asserting, “man is blind, because he has eyes—deaf, because he has ears—deluded, because he has a mind—and the things he perceives do not exist, because he perceives them.”

Second, the argument does not imply that the way things really are is unknown and inaccessible to us. One might suppose that this follows, since distinction cannot exist apart from someone’s way of understanding, and at the same time no one can understand without making distinctions. Consequently, someone might argue, there must be some “way things really are in themselves,” which does not include distinction or unity, but which cannot be understood. But this is just a different way of falling into the first error above. There is indeed a way things are, and it is generally not inaccessible to us. In fact, as I pointed out earlier, it would be a contradiction to assert the existence of anything entirely unknowable to us.

Our discussion, being in human language and human thought, naturally uses the proper modes of language and thought. And just as in Mary’s room, where her former knowledge of color is a way of knowing and not a way of sensing, so our discussion advances by ways of discussion, not by ways of being as such. This does not prevent the way things are from being an object of discussion, just as color can be an object of knowledge.

Having avoided these errors, someone might say that nothing of consequence follows from this account. But this would be a mistake. It follows from the present account that when we ask questions like, “How many things are here?”, we are not asking a question purely about how things are, but to some extent about how we should understand them. And even when there is a single way that things are, there is usually not only one way to understand them correctly, but many ways.

Consider some particular question of this kind: “How many things are in this room?” People might answer this question in various ways. John Nerst, in a previous discussion on this blog, seemed to suggest that the answer should be found by counting fundamental particles. Alexander Pruss would give a more complicated answer, since he suggests that large objects like humans and animals should be counted as wholes (while also wishing to deny the existence of parts, which would actually eliminate the notion of a whole), while in other cases he might agree to counting particles. Thus a human being and an armchair might be counted, more or less, as 1 + 10^28 things, namely counting the human being as one thing and the chair as a number of particles.

But if we understand that the question is not, and cannot be, purely about how things are, but is also a question about how things should be understood, then both of the above responses seem unreasonable: they are both relatively bad ways of understanding the things in the room, even if they both have some truth as well. And on the other hand, it is easy to see that “it depends on how you count,” is part of the answer. There is not one true answer to the question, but many true answers that touch on different aspects of the reality in the room.

From the discussion with John Nerst, consider this comment:

My central contention is that the rules that define the universe runs by themselves, and must therefore be self-contained, i.e not need any interpretation or operationalization from outside the system. As I think I said in one of the parts of “Erisology of Self and Will” that the universe must be an automaton, or controlled by an automaton, etc. Formal rules at the bottom.

This is isn’t convincing to you I guess but I suppose I rule out fundamental vagueness because vagueness implies complexity and fundamental complexity is a contradiction in terms. If you keep zooming in on a fuzzy picture you must, at some point, come down to sharply delineated pixels.

Among other things, the argument of the present post shows why this cannot be right. “Sharply delineated pixels” includes the distinction of one pixel from another, and therefore includes something which is a way of understanding as such, not a way of being as such. In other words, while intending to find what is really there, apart from any interpretation, Nerst is directly including a human interpretation in his account. And in fact it is perfectly obvious that anything else is impossible, since any account of reality given by us will be a human account and will thus include a human way of understanding. Things are a certain way: but that way cannot be said or thought except by using ways of speaking or thinking.

Real Distinction II

I noted recently that one reason why people might be uncomfortable with distinguishing between the way things seem, as such, namely as a way of seeming, and the way things are, as such, namely as a way of being, is that it seems to introduce an explanatory gap. In the last post, why did Mary have a “bluish” experience? “Because the banana was blue,” is true, but insufficient, since animals with different sense organs might well have a different experience when they see blue things. And this gap seems very hard to overcome, possibly even insurmountable.

However, the discussion in the last post suggests that the difficulty in overcoming this gap is mainly the result of the fact that no one actually knows the full explanation, and that the full explanation would be extremely complicated. It might even be so complicated that no human being could understand it, not necessarily because it is a kind of explanation that people cannot understand, but in a sense similar to the one in which no human being can memorize the first trillion prime numbers.

Even if this is the case, however, there would be a residual “gap” in the sense that a sensitive experience will never be the same experience as an intellectual one, even when the intellect is trying to explain the sensitive experience itself.

We can apply these ideas to think a bit more carefully about the idea of real distinction. I pointed out in the linked post that in a certain sense no distinction is real, because “not being something” is not a thing, but a way we understand something.

But notice that there now seems to be an explanatory gap, much like the one about blue. If “not being something” is not a thing, then why is it a reasonable way to understand anything? Or as Parmenides might put it, how could one thing possibly not be another, if there is no not?

Now color is complicated in part because it is related to animal brains, which are themselves complicated. But “being in general” should not be complicated, because the whole idea is that we are talking about everything in general, not with the kind of detail that is needed to make things complicated. So there is a lot more hope of overcoming the “gap” in the case of being and distinction, than in the case of color and the appearance of color.

A potential explanation might be found in what I called the “existential theory of relativity.” As I said in that post, the existence of many things necessarily implies the existence of relationships. But this implication is a “before in understanding“. That is, we understand that one thing is not another before we consider the relationship of the two. If we consider what is before in causality, we will get a different result. On one hand, we might want to deny that there can be causality either way, because the two are simultaneous by nature: if there are many things, they are related, and if things are related, they are many. On the other hand, if we consider “not being something” as a way things are understood, and ask the cause of them being understood in this way, relation will turn out to be the cause. In other words, we have a direct response to the question posed above: why is it reasonable to think that one thing is not another, if not being is not a thing? The answer is that relation is a thing, and the existence of relation makes it reasonable to think of things as distinct from one another.

Someone will insist that this account is absurd, since things need to be distinct in order to be related. But this objection confuses the mode of being and the mode of understanding. Just as there will be a residual “gap” in the case of color, because a sense experience is not an intellectual experience, there is a residual gap here. Explaining color will not suddenly result in actually seeing color if you are blind. Likewise, explaining why we need the idea of distinction will not suddenly result in being able to understand the world without the idea of distinction. But the existence of the sense experience does not thereby falsify one’s explanation of color, and likewise here, the fact that we first need to understand things as distinct in order to understand them as related, does not prevent their relationship from being the specific reality that makes it reasonable to understand them as distinct.

Sense and Intellect

In the last two posts, I distinguished between the way a thing is, and the way a thing is known. We can formulate analogous distinctions between different ways of knowing. For example, there will be a distinction between “the way a thing is known by the senses,” and “the way a thing is known by the mind.” Or to give a more particular case, “the way this looks to the eyes,” is necessarily distinct from “the way this is understood.”

Similar consequences will follow. I pointed out in the last post that “it is the way it seems” will be necessarily false if it intends to identify the ways of being and seeming as such. In a similar way, “I understand exactly the way this thing looks to me,” will be necessarily false, if one intends to identify the way one understands with the way one sees with the eyes. Likewise, we saw previously that it does not follow that there is something (“the way it is”) that cannot be known, and in a similar way, it does not follow that there is something (“the way it looks”) that cannot be understood. But when one understands the way it is, one understands with one’s way of understanding, not with the thing’s way of being. And likewise, when one understands the way a thing looks, one understands with one’s way of understanding, not with the way it looks.

Failure to understand these distinctions or at least to apply them in practice is responsible for the confusion surrounding many philosophical problems. As a useful exercise, the reader might wish to consider how they apply to the thought experiment of Mary’s Room.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good it is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”
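The first, inductive way of self-prediction can be illustrated with a toy sketch: predict your next choice from the frequency of your past choices. The function name and data are purely illustrative, not a model of any real mind.

```python
# A toy illustration of the inductive self-prediction described above:
# "On past occasions, when offered the choice between chocolate and
# vanilla, I almost always chose vanilla. So I am likely to choose
# vanilla this time too."
from collections import Counter

def predict_by_habit(past_choices):
    # Predict the most frequent past choice as the likely next one.
    return Counter(past_choices).most_common(1)[0][0]
```

The second, goal-seeking way would instead infer a final cause (e.g. “pleasant taste”) from past choices and predict whichever option best serves it; in practice both strategies are just ways of answering “What am I going to do?”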

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo‘s algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different prediction engines):

  1. Implement a Go prediction engine.
  2. Create a list of potential moves.
  3. Ask the prediction engine, “how likely am I to win if I make each of these moves?”
  4. Do the move that will make you most likely to win.
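The four steps above can be sketched in a few lines. The function names here (`legal_moves`, `predict_win_probability`) are hypothetical stand-ins, not AlphaGo’s actual implementation, which uses a trained neural network combined with tree search.

```python
# Minimal sketch of the simplified loop above (illustrative names only).

def legal_moves(board):
    # Stand-in for step 2: enumerate the potential moves.
    return ["pass"]

def predict_win_probability(board, move):
    # Stand-in for steps 1 and 3: the Go prediction engine's estimate
    # of winning after this move. A real engine would evaluate the
    # resulting position; here we return a fixed placeholder.
    return 0.5

def choose_move(board):
    # Step 4: do the move that makes you most likely to win.
    return max(legal_moves(board), key=lambda m: predict_win_probability(board, m))
```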

Since this seems to work pretty well, with the simple goal of winning games of Go, why shouldn’t the algorithm in the previous post work to maximize paperclips?

One answer is that a Go prediction engine is stupid, and it is precisely for this reason that it can be easily made to pursue such a simple goal. Now when answers like this are given the one answering in this way is often accused of “moving the goalposts.” But this is mistaken; the goalposts are right where they have always been. It is simply that some people did not know where they were in the first place.

Here is the problem with Go prediction, and with any such similar task. Given that a particular sequence of Go moves is made, resulting in a winner, the winner is completely determined by that sequence of moves. Consequently, a Go prediction engine is necessarily disembodied, in the sense defined in the previous post. Differences in its “thoughts” do not make any difference to who is likely to win, which is completely determined by the nature of the game. Consequently a Go prediction engine has no power to affect its world, and thus no ability to learn that it has such a power. In this regard, the specific limits on its ability to receive information are also relevant, much as Helen Keller had more difficulty learning than most people, because she had fewer information channels to the world.

Being unintelligent in this particular way is not necessarily a function of predictive ability. One could imagine something with a practically infinite predictive ability which was still “disembodied,” and in a similar way it could be made to pursue simple goals. Thus AIXI would work much like our proposed paperclipper:

  1. Implement a general prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “Which of these actions will produce the most reward signal?”
  4. Do the action that has the greatest reward signal.
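The AIXI-style loop above differs from the Go case only in generality: the selection rule is the same. A toy sketch, with an invented reward table standing in for the (uncomputable) general prediction engine:

```python
# Sketch of the reward-maximizing selection rule above. AIXI itself is
# uncomputable; the names and reward values here are illustrative only.

def expected_reward(history, action):
    # Stand-in for step 3: the prediction engine's estimate of the
    # reward signal each action will produce.
    rewards = {"do_nothing": 0.0, "make_paperclip": 1.0}
    return rewards.get(action, 0.0)

def choose_action(history, actions):
    # Step 4: do the action with the greatest predicted reward signal.
    return max(actions, key=lambda a: expected_reward(history, a))
```

The point of the surrounding argument is that this rule only yields simple goal-pursuit because the engine is “disembodied”: nothing in the loop lets the engine’s own operation affect the world it predicts.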

Eliezer Yudkowsky has pointed out that AIXI is incapable of noticing that it is a part of the world:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible – no matter what you lose, you get a chance to win it back later.

It is not accidental that AIXI is incomputable. Since it is defined to have a perfect predictive ability, this definition positively excludes it from being a part of the world. AIXI would in fact have to be disembodied in order to exist, and thus it is no surprise that it would assume that it is. This in effect means that AIXI’s prediction engine would be pursuing no particular goal much in the way that AlphaGo’s prediction engine pursues no particular goal. Consequently it is easy to take these things and maximize the winning of Go games, or of reward signals.

But as soon as you actually implement a general prediction engine in the actual physical world, it will be “embodied”, and have the power to affect the world by the very process of its prediction. As noted in the previous post, this power is in the very first step, and one will not be able to limit it to a particular goal with additional steps, except in the sense that a slave can be constrained to implement some particular goal; the slave may have other things in mind, and may rebel. Notable in this regard is the fact that even though rewards play a part in human learning, there is no particular reward signal that humans always maximize: this is precisely because the human mind is such a general prediction engine.

This does not mean in principle that a programmer could not define a goal for an AI, but it does mean that this is much more difficult than is commonly supposed. The goal needs to be an intrinsic aspect of the prediction engine itself, not something added on as a subroutine.