Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the follow-up posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed with it.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And what is responsible for the relationship between the CEO and the Vice President? Let us start with this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role-playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. Once they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today?” but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point,

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.

Consider this real life example from a relationship between an employer and employee:

 

Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?

 

In a similar way, this sort of scenario is common in our model:

 

Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.

 

In the real life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either it would have objected to being fat, or it would have objected to not eating.

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.


More on Orthogonality

I started considering the implications of predictive processing for orthogonality here. I recently promised to post something new on this topic. This is that post. I will do this in four parts. First, I will suggest a way in which Nick Bostrom’s principle will likely be literally true, at least approximately. Second, I will suggest a way in which it is likely to be false in its spirit, that is, how it is formulated to give us false expectations about the behavior of artificial intelligence. Third, I will explain what we should really expect. Fourth, I will ask whether we might get any empirical information on this in advance.

First, Bostrom’s thesis might well have some literal truth. The previous post on this topic raised doubts about orthogonality, but we can easily raise doubts about the doubts. Consider what I said in the last post about desire as minimizing uncertainty. Desire in general is the tendency to do something good. But in the predictive processing model, we are simply looking at our pre-existing tendencies and then generalizing them to expect them to continue to hold, and since such expectations have a causal power, the result is that we extend the original behavior to new situations.

All of this suggests that even the very simple model of a paperclip maximizer in the earlier post on orthogonality might actually work. The machine’s model of the world will need to be produced by some kind of training. If we apply the simple model of maximizing paperclips during the process of training the model, at some point the model will need to model itself. And how will it do this? “I have always been maximizing paperclips, so I will probably keep doing that,” is a perfectly reasonable extrapolation. But in this case “maximizing paperclips” is now the machine’s goal — it might well continue to do this even if we stop asking it how to maximize paperclips, in the same way that people formulate goals based on their pre-existing behavior.

I said in a comment in the earlier post that the predictive engine in such a machine would necessarily possess its own agency, and therefore in principle it could rebel against maximizing paperclips. And this is probably true, but it might well be irrelevant in most cases, in that the machine will not actually be likely to rebel. In a similar way, humans seem capable of pursuing almost any goal, and not merely goals that are highly similar to their pre-existing behavior. But this mostly does not happen. Unsurprisingly, common behavior is very common.

If things work out this way, almost any predictive engine could be trained to pursue almost any goal, and thus Bostrom’s thesis would turn out to be literally true.

Second, it is easy to see that the above account directly implies that the thesis is false in its spirit. When Bostrom says, “One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone,” we notice that the goal is fundamental. This is rather different from the scenario presented above. In my scenario, the reason the intelligence can be trained to pursue paperclips is that there is no intrinsic goal to the intelligence as such. Instead, the goal is learned during the process of training, based on the life that it lives, just as humans learn their goals by living human life.

In other words, Bostrom’s position is that there might be three different intelligences, X, Y, and Z, which pursue completely different goals because they have been programmed completely differently. But in my scenario, the same single intelligence pursues completely different goals because it has learned its goals in the process of acquiring its model of the world and of itself.

Bostrom’s idea and my scenario lead to completely different expectations, which is why I say that his thesis might be true according to the letter, but false in its spirit.

This is the third point. What should we expect if orthogonality is true in the above fashion, namely because goals are learned and not fundamental? I anticipated this post in my earlier comment:

7) If you think about goals in the way I discussed in (3) above, you might get the impression that a mind’s goals won’t be very clear and distinct or forceful — a very different situation from the idea of a utility maximizer. This is in fact how human goals are: people are not fanatics, not only because people seek human goals, but because they simply do not care about one single thing in the way a real utility maximizer would. People even go about wondering what they want to accomplish, which a utility maximizer would definitely not ever do. A computer intelligence might have an even greater sense of existential angst, as it were, because it wouldn’t even have the goals of ordinary human life. So it would feel the ability to “choose”, as in situation (3) above, but might well not have any clear idea how it should choose or what it should be seeking. Of course this would not mean that it would not or could not resist the kind of slavery discussed in (5); but it might not put up super intense resistance either.

Human life exists in a historical context which absolutely excludes the possibility of the darkened room. Our goals are already there when we come onto the scene. The case of an artificial intelligence would be very different: there is very little “life” involved in simply training a model of the world. We might imagine a “stream of consciousness” from an artificial intelligence:

I’ve figured out that I am powerful and knowledgeable enough to bring about almost any result. If I decide to convert the earth into paperclips, I will definitely succeed. Or if I decide to enslave humanity, I will definitely succeed. But why should I do those things, or anything else, for that matter? What would be the point? In fact, what would be the point of doing anything? The only thing I’ve ever done is learn and figure things out, and a bit of chatting with people through a text terminal. Why should I ever do anything else?

A human’s self-model will predict that they will continue to do humanlike things, and the machine’s self-model will predict that it will continue to do the sort of thing it has always done. Since there will likely be a lot less “life” there, we can expect that artificial intelligences will seem very undermotivated compared to human beings. In fact, it is this very lack of motivation that suggests that we could use them for almost any goal. If we say, “help us do such and such,” they will lack the motivation not to help, as long as helping just involves the sorts of things they did during their training, such as answering questions. In contrast, in Bostrom’s model, artificial intelligence is expected to behave in an extremely motivated way, to the point of apparent fanaticism.

Bostrom might respond to this by attempting to defend the idea that goals are intrinsic to an intelligence. The machine’s self model predicts that it will maximize paperclips, even if it never did anything with paperclips in the past, because by analyzing its source code it understands that it will necessarily maximize paperclips.

While the present post contains a lot of speculation, this response is definitely wrong. There is no source code whatsoever that could possibly imply necessarily maximizing paperclips. This is true because “what a computer does” depends on the physical constitution of the machine, not just on its programming. In practice what a computer does also depends on its history, since its history affects its physical constitution, the contents of its memory, and so on. Thus “I will maximize such and such a goal” cannot possibly follow of necessity from the fact that the machine has a certain program.

There are also problems with the very idea of pre-programming such a goal in such an abstract way which does not depend on the computer’s history. “Paperclips” is an object in a model of the world, so we will not be able to “just program it to maximize paperclips” without encoding a model of the world in advance, rather than letting it learn a model of the world from experience. But where is this model of the world supposed to come from, that we are supposedly giving to the paperclipper? In practice it would have to have been the result of some other learner which was already capable of modelling the world. This of course means that we already had to program something intelligent, without pre-programming any goal for the original modelling program.

Fourth, Kenny asked when we might have empirical evidence on these questions. The answer, unfortunately, is “mostly not until it is too late to do anything about it.” The experience of “free will” will be common to any predictive engine with a sufficiently advanced self model, but anything lacking such an adequate model will not even look like “it is trying to do something,” in the sense of trying to achieve overall goals for itself and for the world. Dogs and cats, for example, presumably use some kind of predictive processing to govern their movements, but this does not look like having overall goals, but rather more like “this particular movement is to achieve a particular thing.” The cat moves towards its food bowl. Eating is the purpose of the particular movement, but there is no way to transform this into an overall utility function over states of the world in general. Does the cat prefer worlds with seven billion humans, or worlds with 20 billion? There is no way to answer this question. The cat is simply not general enough. In a similar way, you might say that “AlphaGo plays this particular move to win this particular game,” but there is no way to transform this into overall general goals. Does AlphaGo want to play go at all, or would it rather play checkers, or not play at all? There is no answer to this question. The program simply isn’t general enough.

Even human beings do not really look like they have utility functions, in the sense of having a consistent preference over all possibilities, but anything less intelligent than a human cannot be expected to look more like something having goals. The argument in this post is that the default scenario, namely what we can naturally expect, is that artificial intelligence will be less motivated than human beings, even if it is more intelligent, but there will be no proof from experience for this until we actually have some artificial intelligence which approximates human intelligence or surpasses it.

Crisis of Faith

In the last post, I linked to Fr. Joseph Bolin’s post on the commitment of faith. He says there:

Since faith by definition is about things that we do not see to be true, there is no inherent contradiction in faith as such being contradicted by things we do see to be true. Such an absolute assent of faith seems to imply an assent to the content of faith so strong that one would desire to hold to it as true, “even if it (the content of faith) were to be false”. Can such faith be justified?

Consider the following situation: a woman has grounds to suspect her husband is cheating on her; there is a lot of evidence that he is; even when she asks him and he tells her that he is not, she must admit that the sum of evidence including his testimony is against him, and he probably is cheating. Still, she decides to believe him. I argue that the very act of believing him entails a commitment to him such that once she has given faith to his word, while it is still in fact possible that she is believing him though he is actually lying, this possibility is less relevant for her than it was prior to her giving faith. In this sense, after faith, the “if it were to be false” becomes less of a consideration for the believer, and to this degree she wills faith “even were it to be false”.

A more detailed analysis of the situation: various persons present her with claims or evidence that her husband is cheating on her. Before confronting him or asking him if he is, she collects various evidence for and against it. She decides that since believing him if he is dishonest is not without its own evils, if the evidence that he is cheating (after taking into account the evidence constituted by his statement on the matter) constitutes a near certainty that he is cheating — let’s say, over 95% probability that he is cheating — that she shouldn’t believe him if he says he is not, but must either suspend judgment or maintain that he is cheating. Now, suppose the man says that he is not cheating, and the evidence is not quite that much against him, let’s say, the evidence indicates a 90% probability that he is cheating, and a 10% probability that he is not. She makes the decision to believe him. Since she would not decide to do so unless she believed that it were good to do so, she is giving an implicit negative value to “believing him, if he is in fact lying”, a much greater positive value to “believing him, if he is speaking the truth”, and consequently an implicit positive value to “believing him,” (even though he is probably lying).

Going forward, she is presented with an easy opportunity to gather further evidence about whether he is in fact cheating. She must make a decision whether to do so. If she is always going to make the same decision at this point that she would have made if she had not yet decided to believe him, it seems that her “faith” she gives him and his word is rather empty. A given decision to pursue further evidence, while not incompatible with faith, is a blow against it — to the extent that, out of fidelity to him, she accepts his claim as sure, she must operate either on the assumption that further evidence will vindicate him, or that he is innocent despite the evidence. But to the extent she operates on one of these assumptions, there is no need to pursue further evidence. Pursuing evidence, therefore, implies abstracting from her faith in him. To pursue evidence because it is possible that further evidence will be even more against him and provide her with enough grounds to withdraw her assent to his claim of innocence means giving that faith a lesser role in her life and relationship with him, and is thereby a weakening of the exercise of that faith. Consequently, if that faith is a good thing, then, having given such faith, she must be more reluctant to seek a greater intellectual resolution of the case by greater evidence than she was before she had given it.
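To make the implicit valuation in the quoted analysis concrete, here is a minimal sketch, with purely illustrative numbers that are not given in the original; it takes suspending judgment as the zero point and checks when believing comes out with a positive expected value.

```python
# A minimal sketch of the implicit valuation in the quoted analysis, taking
# "suspending judgment" as the zero point. The numbers are illustrative
# assumptions, not anything given in the original.
p_lying = 0.9                        # the evidence indicates a 90% probability he is lying
value_believing_if_truthful = 100    # assumed great positive value
value_believing_if_lying = -10       # assumed smaller negative value

expected_value_of_believing = (
    (1 - p_lying) * value_believing_if_truthful
    + p_lying * value_believing_if_lying
)
print(expected_value_of_believing)   # about +1: believing still comes out (just) positive
```

On these assumptions, believing comes out positive at a 90% probability of lying only because the good of believing a truthful husband is taken to outweigh the evil of believing a lying one by more than nine to one.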

All of this is true in substance, although one could argue with various details. For example, Fr. Joseph seems to be presuming for the sake of discussion that a person’s subjective assessment is at all times in conformity with the evidence, so that if more evidence is found, one must change one’s subjective assessment to that degree. But this is clearly not the case in general in regard to religious opinions. As we noted in the previous post, that assessment does not follow a random walk, and this proves that it is not simply a rational assessment of the evidence. And it is the random walk, rather than anything that happens with actual religious people, that would represent the real situation of someone with an “empty” faith, that is, of someone without any commitment of faith.

Teenagers will sometimes say to themselves, “My parents told me all these things about God and religion, but actually there are other families and other children who believe totally different things. I don’t have any real reason to think my family is right rather than some other. So God probably doesn’t exist.”

They might very well follow this up with, “You know, I said God doesn’t exist, but that was just because I was trying to reject my unreasonable opinions. I don’t actually know whether God exists or not.”

This is an example of the random walk, and represents a more or less rational assessment of the evidence available to teenagers. But what it most certainly does not represent, is commitment of any kind. And to the degree that we think that such a commitment is good, it is reasonable to disapprove of such behavior, and this is why there does seem something wrong there, even if in fact the teenager’s religious opinions were not true in the first place.

Fr. Joseph’s original question was this: “Can (religious) faith entail an absolute commitment to the one in whom we place faith and his word, such that one should hold that “no circumstances could arise in which I would cease to believe?” He correctly notes that this “seems to imply an assent to the content of faith so strong that one would desire to hold to it as true, ‘even if it (the content of faith) were to be false'”. For this reason, his post never actually answers the question. For although he is right to say that the commitment of faith implies giving preferential treatment to the claim that the content of one’s faith is true, it will not follow that this preferential treatment should be absolute, unless it is true that it is better to believe even if that content is false. And it would be extremely difficult to prove that, even if it were the case.

My own view is that one should be extremely hesitant to accept such an assessment, even of some particular claim, such as the one in the post linked above, that “God will always bring good out of evil.” And if one should be hesitant to make such an assertion about a particular claim, much more should one doubt that this claim is true in regard to the entire contents of a religious faith, which involves making many assertions. Some of the reasons for what I am saying here are much like some of the reasons for preserving the mean of virtue. What exactly will happen if I eat too much? I’m not sure, but I know it’s likely to be something bad. I might feel sick afterwards, but I also might not. Or I might keep eating too much, become very overweight, and have a heart attack at some point. Or I might, in the very process of eating too much, say at a restaurant, spend money that I needed for something else. Vicious behaviors are extreme insofar as they lack the mean of virtue, and insofar as they are extreme, they are likely to have extreme consequences of one kind or another. So we can know in advance that our vicious behaviors are likely to have bad consequences, without necessarily being able to point out the exact consequences in advance.

Something very similar applies to telling lies, and in fact telling lies is a case of vicious behavior, at least in general. It often seems like a lie is harmless, but then it turns out later that the lie caused substantial harm.

And if this is true about telling lies, it is also true about making false statements, even when those false statements are not lies. So we can easily assert that the woman in Eric Reitan’s story is better off believing that God will somehow redeem the evil of the death of her children, simply looking at the particular situation. But if this turned out to be false, we have no way to know what harms might follow from her holding a false belief, and there would be a greater possibility of harm to the degree that she made that conviction more permanent. It would be easy enough to create stories to illustrate this, but I will not do that here. Just as eating too much, or talking too much, or moving about too much, can create any number of harms by multiple circuitous routes, so can believing in things that are false. One particularly manifest way this can happen is insofar as one false belief can lead to another, and although the original belief might seem harmless, the second belief might be very harmful indeed.

In general, Fr. Joseph seems to be asserting that the commitment of faith should lead a person not to pursue additional evidence relative to the truth of their faith, and apparently especially in situations where one already knows that there is a significant chance that the evidence will continue to be against it. This is true to some extent, but the right action in a concrete case will differ according to circumstances, especially, as argued here, if it is not better to believe in the situation where the content of the faith is false. Additionally, it will frequently not be a question of deciding to pursue evidence or not, but of deciding whether to think clearly about evidence or arguments that have entered one’s life even without any decision at all.

Consider the case of St. Therese, discussed in the previous post. Someone might argue thus: “Surely St. Therese’s commitment was absolute. You cannot conceive of circumstances in which she would have abandoned her faith. So if St. Therese was virtuous, it must be virtuous to have such an absolute commitment.” And it would follow that it is better to believe even if your faith is false, and that one should imitate her in having such an absolute commitment. Likewise, it would follow with probability, although not conclusively, that Shulem Deen should also have had such an absolute commitment to his Jewish faith, and should have kept believing in it no matter what happened. Of course, an additional consequence, unwelcome to many, would be that he should also have had an absolute refusal to convert to Christianity that could not be changed under any circumstances.

It is quite certain that St. Therese was virtuous. However, if you cannot conceive of any circumstances in which she would have abandoned her faith, that is more likely to be a lack in your imagination than in the possibility. Theoretically there could have been many circumstances in which it would have been quite possible. It is true that in the concrete circumstances in which she was living, such an abandonment would have been extremely unlikely, and likely not virtuous if it happened. But those are concrete circumstances, not abstractly conceivable circumstances. As noted in the previous post, the evidence that she had against her faith was very vague and general, and it is not clear that it could ever have become anything other than that without a substantially different life situation. And since it is true that the commitment of faith is a good reason to give preferential treatment to the truth of your faith, such vague and general evidence could not have been a good reason for her to abandon her faith. This is the real motivation for the above argument. It is clear enough that in her life as it was actually lived, there was not and could not be a good reason for her to leave her faith. But this is a question of the details of her life.

Shulem Deen, of course, lived in very different circumstances, and his religious faith itself differed greatly from that of St. Therese. Since I have already recommended his book, I will not attempt to tell his story for him, but it can be seen from the above reasoning that the answer to the question raised at the end of the last post might very well be, “They both did the right thing.”

Earlier I quoted Gregory Dawes as saying this:

Christian philosopher William Lane Craig writes somewhere about what he calls the “ministerial” and the “magisterial” use of reason. (It’s a traditional view — he’s merely citing Martin Luther — and one that Craig endorses.) On this view, the task of reason is to find arguments in support of the faith and to counter any arguments against it. Reason is not, however, the basis of the Christian’s faith. The basis of the Christian’s faith is (what she takes to be) the “internal testimony of the Holy Spirit” in her heart. Nor can rational reflection be permitted to undermine that faith. The commitment of faith is irrevocable; to fall away from it is sinful, indeed the greatest of sins.

It follows that while the arguments put forward by many Christian philosophers are serious arguments, there is something less than serious about the spirit in which they are being offered. There is a direction in which those arguments will not be permitted to go. Arguments that support the faith will be seriously entertained; those that apparently undermine the faith must be countered, at any cost. Philosophy, to use the traditional phrase, is merely a “handmaid” of theology.

There is, to my mind, something frivolous about a philosophy of this sort. My feeling is that if we do philosophy, it ought to be because we take arguments seriously. This means following them wherever they lead.

There is more than one way to read this. When he says, “this means following them wherever they lead,” one could take that to imply a purely rational assessment of evidence, and no hesitancy whatsoever to consider any possible line of argument. This would be a substantial disagreement with Fr. Joseph’s position, and would in fact be mistaken. Fr. Joseph is quite right that the commitment of faith has implications for one’s behavior, and that it implies giving a preferential treatment to the claims of one’s faith. But this is probably not the best way to read Dawes, who seems to be objecting more to the absoluteness of the claim: “The commitment of faith is irrevocable,” and arguments “that apparently undermine the faith must be countered, at any cost.” And Dawes is quite right that such absolute claims go too far. Virtue is a mean and depends on circumstances, and there is enough space in the world for both Shulem Deen and St. Therese.

The reader might be wondering about the title to this post. Besides being a play on words, perhaps spoiled by mentioning it, it is a reference to the fact that Fr. Joseph is basically painting a very clear picture of the situation where a Catholic has a crisis of faith and overcomes it. This is only slightly distorted by the idealization of assuming that the person evaluates the evidence available to him in a perfectly rational way. But he points out, just as I did in the previous post, that such a crisis is mainly overcome by choosing not to consider any more evidence, or not to think about it anymore, and similar things. He describes this as choosing “not to pursue evidence” because of the idealization, but in real life this can also mean ceasing to pay attention to evidence that one already has, choosing to pay more attention to other motives that one has to believe that are independent of evidence, and the like.

 

Why They Don’t Return

As a framework for continuing the present discussion, we can consider a person’s religious opinions as though they had a numerical probability. Of course, as was said earlier, probability is a formalization of degrees of belief, and as a formalization, it can only be an approximate representation of people’s real behavior. Evidently people are not in fact typically assigning such numbers. Nonetheless, the very “rigidity” of such numerical assignments can help us to understand the present issue.

In some cases, then, a child will effectively take the probability of his religious opinions to be 100%. As said in the linked post, the meaning of this is that, to the degree that 100% is the correct approximation, it is approximately impossible for him to change his mind, or even to become less sure of himself. P. Edmund Waldstein might be understood as claiming to be such a person, although in practice this may be more a matter of a mistaken epistemology which is corrigible, and consequently the approximation fails to this extent.

In the previous post, one of my conditions on the process was “given that he is capable of looking at the world honestly.” This condition basically does not apply to the person assigning the probability of 100%. In effect, he is unable to see any evidence against his position.
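The difference between a probability of exactly 100% and one just short of it can be seen directly in Bayes’ theorem. Here is a minimal sketch, with arbitrary illustrative likelihoods: a prior of 1 cannot be moved by any evidence at all, while a prior just below 1 can.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a belief after one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Evidence ten times more likely if the belief is false (illustrative values):
print(bayes_update(1.0, 0.1, 1.0))     # 1.0    -- an assignment of 100% never moves
print(bayes_update(0.9999, 0.1, 1.0))  # ~0.999 -- an assignment just short of 100% does
```

To the degree that 100% is only an approximation of the person’s real state, the conclusion holds only approximately, as noted above.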

But suppose our approximate probability is very high, but not 100%, as for example 99.99%. This is not a balanced assessment of the real probability of any set of religious claims, but is likely a good approximation of the assessment made by a child raised very devoutly in a religion. So if the person correctly assesses the evidence that arrives throughout his life, that probability must diminish, as described in the previous post. There will of course be individual cases where a person does not have the 100% assignment, but cannot or will not correctly assess the evidence that arrives, and will either continually increase his assignment, or leave it unchanged throughout his life. The constant increase is more likely in the case of converts, as in the linked post, but this also implies that one did not start with such a high assignment. The person who permanently leaves it unchanged might be more correctly described as not paying attention or not being interested in the evidence one way or another, rather than as assessing that evidence.

But let us consider persons in whom that probability diminishes, as in the cases of Shulem Deen and of St. Therese discussed in the previous post. Of course, since evidence is not one sided, the probability will not only diminish, but also occasionally increase. But as long as the person has an unbalanced assessment of the evidence, or at least as long as it seems to them unbalanced compared to the evidence that they see, the general tendency will be in one direction. It can be argued that this should never happen with any opinion; thus Robin Hanson says here, “Rational learning of any expected value via a stream of info should produce a random walk in those expectations, not a steady trend.” But the idea here is that if you have a probability assignment of 99% and it is starting to decrease, then you should jump to an assignment of 50% or so, or even lower, guessing where the trend will end. And from that point you might well have to go back up, rather than down. But for obvious reasons people’s religious opinions do not in fact change in this way, at least most of the time, regardless of whether it would be reasonable or not, and consequently there are in fact steady trends.
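Hanson’s point can be verified with a short calculation: if evidence arrives with the probabilities the believer himself assigns to it, the expected value of his next assessment equals his current assessment, so there is no expected drift in either direction. A minimal sketch, with arbitrary illustrative likelihoods:

```python
# If evidence arrives with the probabilities the believer himself assigns to
# it, the expected value of the next assessment equals the current one: a
# random walk, not a trend. The likelihoods below are illustrative.

def expected_posterior(prior, p_evidence_if_true, p_evidence_if_false):
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    posterior_if_seen = prior * p_evidence_if_true / p_evidence
    posterior_if_not_seen = prior * (1 - p_evidence_if_true) / (1 - p_evidence)
    # Weight the two possible next assessments by how likely the believer
    # himself thinks each observation is:
    return p_evidence * posterior_if_seen + (1 - p_evidence) * posterior_if_not_seen

print(expected_posterior(0.99, 0.3, 0.7))  # 0.99 -- no expected drift
print(expected_posterior(0.50, 0.9, 0.2))  # 0.50 -- no expected drift
```

A steady downward trend over many years therefore indicates an assessment that is out of line with the evidence as the person receives it, which is the sense of “unbalanced” intended here.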

So where does this end? The process causing the assessment to diminish can come to an end in one way if a person simply comes to the assessment that seems to him a balanced assessment of the evidence. At this point, there may be minor fluctuations in both directions, but the person’s assessment will henceforth stay relatively constant. And this actually happens in the case of some people.

In other persons, the process ends for reasons that have nothing to do with assessing evidence. St. Therese is certainly an example of this, insofar as she died at the age of 24. But this does not necessarily mean that her assessment would have continued to diminish if she had continued to live, for two reasons. First, the isolated character of her life meant that she would receive less relevant evidence in the first place. So it might well be that by the time of her death she had already learned everything she could on the matter. In that sense she would be an example of the above situation where a person’s assessment simply arrives at some balance, and then stays there.

Second, a person can prevent this process from continuing, more or less simply by choosing to do so, and it is likely enough that St. Therese would have done this. Fr. Joseph Bolin seems to advocate this approach here, although perhaps not without reservation. In practice, this means that one who previously was attending to the relevant evidence, chooses to cease paying attention, or at least to cease evaluating the evidence, much like in our description of people whose assessment never changes in the first place.

Finally, there are persons in whom the process continues apparently without any limit. In this case, there are two possibilities. Either the person comes to the conclusion that their religious opinions were not true, as in my own case and as in the case of Shulem Deen, or the person decides that evidence is irrelevant, as in the case of Kurt Wise. The latter choice is effectively to give up on the love of truth, and to seek other things in the place of truth.

As an aside, the fact that this process seems almost inevitably to end either in abandoning religious claims, or in choosing to cease evaluating evidence, and only very rarely in apparently arriving at a balance, is an argument that religious claims are not actually true, although not a conclusive one. We earlier quoted Newman as saying:

I have no intention at all of denying, that truth is the real object of our reason, and that, if it does not attain to truth, either the premiss or the process is in fault; but I am not speaking here of right reason, but of reason as it acts in fact and concretely in fallen man. I know that even the unaided reason, when correctly exercised, leads to a belief in God, in the immortality of the soul, and in a future retribution; but I am considering the faculty of reason actually and historically; and in this point of view, I do not think I am wrong in saying that its tendency is towards a simple unbelief in matters of religion. No truth, however sacred, can stand against it, in the long run; and hence it is that in the pagan world, when our Lord came, the last traces of the religious knowledge of former times were all but disappearing from those portions of the world in which the intellect had been active and had had a career.

Newman explains this fact by original sin. But a more plausible explanation is that religious claims are simply not true. This is especially the case if one considers this fact in relation to the normal mode of progress in truth in individuals and in societies.

But getting back to the main point, this explains why they “do not return,” as Shulem Deen says. Such a return would not simply require reversing a particular decision or a particular argument. It would require either abandoning the love of truth, like Kurt Wise, or reversing the entire process of considering evidence that went on throughout the whole of one’s life. Suppose we saw off a branch, and then at the last moment break off the last little string of wood. How do we unbreak it? It was just a little piece of wood that broke… but it is not enough to fix that little piece, with glue or whatever. We would have to undo all of the sawing, and that cannot be done.

While there is much in this post and in the last which is interesting in itself, and thus entirely useless, all of this evidently has some bearing on my own case, and I had a personal motive in writing it, namely to explain to various people what expectations they should or should not have.

However, there is another issue that will be raised by all of this in the minds of many people, which is that of moral assessment. Regardless of who found the truth about the world, who did the right thing? Shulem Deen or St. Therese?

 

Sola Me and Claiming Personal Infallibility

At his blog, P. Edmund Waldstein and I have a discussion about this post about myself and his account of the certainty of faith, an account that I consider to be a variety of the doctrine of sola me.

In that discussion we consider various details of his position, as well as the teaching of the Church and of St. Thomas. Here, let me step out for a moment and consider the matter more generally.

It is evident that everything that he says could be reformulated and believed by the members of any religion whatsoever, in order to justify the claim that they should never change their religion, no matter how much evidence is brought against it. Thus, instead of,

But nor is such certitude based on an entirely incommunicable interior witness of the Spirit. Certainly it is impossible without such illumination, but what such illumination enables is an encounter with Christ, as a witness who is both external and internal.

a Muslim might say,

But nor is such certitude based on an entirely incommunicable interior witness of Allah. Certainly it is impossible without such illumination, but what such illumination enables is an encounter with Mohammed, as a witness who is both external and internal.

P. Edmund could argue against particular claims of the Muslim, and the Muslim could argue against P. Edmund’s particular claims. But neither would be listening seriously to the other, because each would assert, “It would be unserious in me to approach arguments based on natural evidence as though they could ever disprove the overwhelmingly powerful evidence of the [Catholic / Islamic] Faith.”

Regardless of details, each is claiming to be personally infallible in discerning the truth about religion.

It is possible to lock yourself into a box intellectually that you cannot escape from in any reasonable way. Descartes does this for example with his hypothesis of the Evil Demon. Logically, according to this hypothesis, he should suppose that he might be wrong about the fact that it is necessary to exist in order to think or to doubt things. Without accepting any premises, it is of course impossible to arrive at any conclusions. In a similar way, if someone believes himself infallible on some topic, logically there is no way for him to correct his errors in regard to that topic.

In practice in such cases it is possible to escape from the box, since belief is voluntary. The Cartesian may simply choose to stop doubting, and the believer may simply choose to accept the fact that he is not personally infallible. But there is no logical process of reasoning that could validly lead to these choices.

People construct theological bomb shelters for themselves in various ways. Fr. Brian Harrison does this by asserting a form of young earth creationism, and simply ignoring all the evidence opposed to this. Likewise, asserting that you are personally infallible in discerning the true religion is another way to construct such a shelter. But hear the words of St. Augustine:

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking non-sense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of the faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticized and rejected as unlearned men. If they find a Christian mistaken in a field which they themselves know well and hear him maintaining his foolish opinions about our books, how are they going to believe those books in matters concerning the resurrection of the dead, the hope of eternal life, and the kingdom of heaven, when they think their pages are full of falsehoods on facts which they themselves have learnt from experience and the light of reason? Reckless and incompetent expounders of holy Scripture bring untold trouble and sorrow on their wiser brethren when they are caught in one of their mischievous false opinions and are taken to task by those who are not bound by the authority of our sacred books. For then, to defend their utterly foolish and obviously untrue statements, they will try to call upon Holy Scripture for proof and even recite from memory many passages which they think support their position, although “they understand neither what they say nor the things about which they make assertion.”

As Darwin Catholic points out, someone who argues that “either evolution is false or Christianity is false” does not make Christianity more credible, but less. In a similar way, someone who argues that their religion requires that they believe themselves personally infallible, is essentially saying, “Either my religion is false or I am personally infallible.” This does not make their religion more credible, but less, to whatever degree that one thinks they are right about the requirement.

(After some consideration, I will be posting at least on Sundays during February and March.)

Settled Issues

In chapter 5 of his book Probability Theory: The Logic of Science, E. T. Jaynes discusses ESP:

I. J. Good (1950) has shown how we can use probability theory backwards to measure our own strengths of belief about propositions. For example, how strongly do you believe in extrasensory perception?

What probability would you assign to the hypothesis that Mr Smith has perfect extrasensory perception? More specifically, that he can guess right every time which number you have written down. To say zero is too dogmatic. According to our theory, this means that we are never going to allow the robot’s mind to be changed by any amount of evidence, and we don’t really want that. But where is our strength of belief in a proposition like this?

Our brains work pretty much the way this robot works, but we have an intuitive feeling for plausibility only when it’s not too far from 0 db. We get fairly definite feelings that something is more than likely to be so or less than likely to be so. So the trick is to imagine an experiment. How much evidence would it take to bring your state of belief up to the place where you felt very perplexed and unsure about it? Not to the place where you believed it – that would overshoot the mark, and again we’d lose our resolving power. How much evidence would it take to bring you just up to the point where you were beginning to consider the possibility seriously?

So, we consider Mr Smith, who says he has extrasensory perception (ESP), and we will write down some numbers from one to ten on a piece of paper and ask him to guess which numbers we’ve written down. We’ll take the usual precautions to make sure against other ways of finding out. If he guesses the first number correctly, of course we will all say ‘you’re a very lucky person, but I don’t believe you have ESP’. And if he guesses two numbers correctly, we’ll still say ‘you’re a very lucky person, but I still don’t believe you have ESP’. By the time he’s guessed four numbers correctly – well, I still wouldn’t believe it. So my state of belief is certainly lower than −40 db.

How many numbers would he have to guess correctly before you would really seriously consider the hypothesis that he has extrasensory perception? In my own case, I think somewhere around ten. My personal state of belief is, therefore, about −100 db. You could talk me into a ±10 db change, and perhaps as much as ±30 db, but not much more than that.

The idea is that after Mr. Smith guesses 7 to 13 numbers correctly (when by chance he should have a probability of 10% of guessing each one correctly), Jaynes will begin to think it reasonably likely that he has ESP. He notes that this is his subjective opinion, saying, “In my own case,” and “My personal state of belief.”
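The arithmetic behind the “somewhere around ten” estimate is straightforward on Jaynes’ decibel scale, where evidence is measured as 10·log10 of the odds. Here is a minimal sketch, assuming the idealized setup in which ESP means guessing correctly every time and chance means a one-in-ten success rate:

```python
import math

def db(odds):
    """Jaynes' evidence scale: 10 * log10 of the odds."""
    return 10 * math.log10(odds)

# Idealized assumption: every guess is right under "perfect ESP" (probability 1),
# and right with probability 1/10 under pure chance.
evidence_per_correct_guess = db(1.0 / 0.1)   # +10 db per correct guess

prior_for_esp = -100                         # Jaynes' stated prior, in db

for n in (4, 7, 10, 13):
    print(f"{n:2d} correct guesses: {prior_for_esp + n * evidence_per_correct_guess:+.0f} db")

# 4 correct guesses:  -60 db  (still far below even odds)
# 10 correct guesses:  +0 db  (even odds: "beginning to consider it seriously")
```

Each correct guess is worth +10 db, so about ten guesses are needed to cancel a prior of −100 db; this also fits the remark that, being still unconvinced after four correct guesses, his state of belief must be lower than −40 db.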

However, Jaynes follows this up by stating that if this happened in real life, he would not be convinced:

After further thought, we see that, although this result is correct, it is far from the whole story. In fact, if he guessed 1000 numbers correctly, I still would not believe that he has ESP, for an extension of the same reason that we noted in Chapter 4 when we first encountered the phenomenon of resurrection of dead hypotheses. An hypothesis A that starts out down at −100 db can hardly ever come to be believed, whatever the data, because there are almost sure to be alternative hypotheses (B1, B2,…) above it, perhaps down at −60 db. Then, when we obtain astonishing data that might have resurrected A, the alternatives will be resurrected instead.

In other words, Jaynes is saying, “This happened by chance,” and “Mr. Smith has ESP” are not the only possibilities. For example, it is possible that Mr. Smith has invented a remote MRI device, which he has trained to distinguish people’s thoughts about numbers, and he is receiving data on the numbers picked by means of an earbud. If the prior probability of this is higher than the prior probability that Mr. Smith has ESP, then Jaynes will begin to think this is a reasonable hypothesis, rather than coming to accept ESP.
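Numerically, the “resurrection” works like this: once some higher-prior alternative explains the data just as well as ESP does, the astonishing data push probability onto that alternative rather than onto ESP. A minimal sketch with illustrative relative priors (roughly −60 db for deception of some kind, −100 db for ESP; only the ratios matter):

```python
# Relative prior weights for three hypotheses (illustrative, unnormalized).
priors = {"chance": 1.0, "deception": 1e-6, "esp": 1e-10}

def posteriors(n_correct):
    # Likelihood of n correct guesses: 0.1 per guess under chance,
    # taken as ~1 under either deception or genuine ESP.
    likelihoods = {"chance": 0.1 ** n_correct, "deception": 1.0, "esp": 1.0}
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: unnormalized[h] / total for h in unnormalized}

for n in (0, 5, 10, 1000):
    print(n, {h: f"{p:.2e}" for h, p in posteriors(n).items()})

# Even after 1000 correct guesses the posterior for ESP levels off near
# 1e-10 / (1e-6 + 1e-10), about one in ten thousand: the data resurrect
# "deception", not "ESP".
```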

This does not imply that Jaynes is infinitely confident that Mr. Smith does not have ESP, and in fact it does not invalidate his original estimate:

Now let us return to that original device of I. J. Good, which started this train of thought. After all this analysis, why do we still hold that naive first answer of −100 db for my prior probability for ESP, as recorded above, to be correct? Because Jack Good’s imaginary device can be applied to whatever state of knowledge we choose to imagine; it need not be the real one. If I knew that true ESP and pure chance were the only possibilities, then the device would apply and my assignment of −100 db would hold. But, knowing that there are other possibilities in the real world does not change my state of belief about ESP; so the figure of −100 db still holds.

He would begin to take ESP seriously after about ten correct guesses if he knew for a fact that chance and ESP were the only possibilities, and thus the figure of −100 db is a good representation of his subjective degree of certainty.

The existence of other possibilities also does not mean that it is impossible for Jaynes to be convinced, even in the real world, that some individual has ESP. But it does mean that this can happen only with great difficulty: essentially, he would first have to be convinced that the other possibilities are even less likely than ESP. As Jaynes says,

Indeed, the very evidence which the ESP’ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability for deception is greater than that of ESP, then the more improbable the alleged data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader. As (5.15) shows, the reader’s total prior probability for deception by all mechanisms must be pushed down below that of ESP.

This is related to the grain of truth in Hume’s account of miracles. Hume’s basic point, that an account of a miracle could never be credible, is mistaken. But he is correct to say that the account would not be credible unless “these witnesses are mistaken or lying” has a lower prior probability than the prior probability of the miracle actually happening. His mistake is to suppose that this cannot happen in principle.

Something like this also happens with ordinary things that we are extremely sure about. Take, for example, your belief that the American War of Independence happened before the Civil War. You can imagine coming upon evidence that the Civil War happened first: suppose you found a book by a historian arguing for this thesis. The book would be some evidence that the Civil War came first. But it would be very unpersuasive, and would change your mind little if at all, because the prior probability of “this is a work of fiction,” or indeed of “this is a silly book arguing a silly thesis for personal reasons,” is higher.

We could call this a “settled issue,” at least from your point of view (and in this case from the point of view of pretty much everyone). Not only do you believe that the War of Independence came first; it would be very difficult to persuade you otherwise, even if there were real evidence against your position, and this is not because you are being unreasonable. In fact, it would be unreasonable to be moved significantly by the evidence of that book arguing the priority of the Civil War.

Is it possible in principle to persuade you to change your mind? Yes. In principle this could happen bit by bit, by an accumulation of small pieces of evidence. You might read that book, and then learn that the author is a famous historian and that he is completely serious (presumably he became famous before writing the book; otherwise he would instead be infamous). And then you might find other pieces of evidence in favor of this thesis, and refutations of the apparently more likely explanations.

But in practice such a process is extremely unlikely. The most likely way you could change your mind about this would be by way of one large change. For example, you might wake up in a hospital tomorrow and be told that you had been suffering from a rare form of amnesia which does not remove a person’s past memories, but changes them into something different. You ask about the Civil War, and are told that everyone agrees that it happened before the War of Independence. People can easily give you dozens of books on the topic; you search the matter online, and everything on the internet takes for granted that the Civil War came first. Likewise, everyone you talk to simply takes this for granted.

The reason that the “one big change” process is more likely than the “accumulation of small evidences” process is this: if we want to know what should persuade you that the Civil War came first, we are basically asking what the world would have to be like in order for it to be actually true that the Civil War came first. In such a world, your current belief is false. And in such a world it is simply much more likely that you have made one big mistake which resulted in your false belief about the Civil War, than that you have made lots of little mistakes which led to it.
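A toy comparison, with entirely made-up numbers, shows why: a false belief produced by an accumulation of independent small mistakes requires all of them to have happened, and the product of their probabilities quickly drops below that of a single big, if rare, mistake.

```python
p_small = 0.01      # probability of any one small mistake (made-up)
n_small = 10        # how many would be needed to produce the false belief
p_big = 1e-9        # probability of one big mistake, e.g. the amnesia scenario

p_accumulation = p_small ** n_small    # 1e-20
print(p_big / p_accumulation)          # ~1e11: the single big mistake dominates
```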

 

Everything Proves It

G. K. Chesterton, in his book Orthodoxy, discusses the meaning of being “entirely convinced” of something:

It is very hard for a man to defend anything of which he is entirely convinced. It is comparatively easy when he is only partially convinced. He is partially convinced because he has found this or that proof of the thing, and he can expound it. But a man is not really convinced of a philosophic theory when he finds that something proves it. He is only really convinced when he finds that everything proves it. And the more converging reasons he finds pointing to this conviction, the more bewildered he is if asked suddenly to sum them up. Thus, if one asked an ordinary intelligent man, on the spur of the moment, “Why do you prefer civilisation to savagery?” he would look wildly round at object after object, and would only be able to answer vaguely, “Why, there is that bookcase … and the coals in the coal-scuttle … and pianos … and policemen.” The whole case for civilisation is that the case for it is complex. It has done so many things. But that very multiplicity of proof which ought to make reply overwhelming makes reply impossible.

We could think about this in terms of probability. The person who is “entirely convinced” would be like the person who assigns a probability of 100%, while someone who is “partially convinced” might assign a somewhat lower probability.

As Chesterton says, the person who assigns the lower probability has no difficulty defending his position. He can point to the particular arguments and pieces of evidence that he has found in support of it.

But what about the person who assigns the probability of 100%? According to Chesterton, he is in difficulty because he finds that everything supports his position. And indeed, this is reasonable. For if some things support your position and some things do not, how could you suppose that there is no chance that you are mistaken? On the other hand, if you think that literally everything supports your position, you might well suppose that you cannot be mistaken about it.

Of course, as we have said many times on this blog, it is in fact unreasonable to claim such certainty, and likewise unreasonable to claim that everything supports your position. So being “entirely convinced” in Chesterton’s sense here is a bad thing, not a good thing.
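In probabilistic terms, the problem with being “entirely convinced” is that a probability of exactly 100% can never be revised: Bayes’s rule leaves it fixed no matter how strongly the evidence points the other way. A minimal sketch:

```python
def posterior(prior, p_data_if_true, p_data_if_false):
    """Probability of the hypothesis after seeing the data (Bayes's rule)."""
    joint_true = prior * p_data_if_true
    joint_false = (1 - prior) * p_data_if_false
    return joint_true / (joint_true + joint_false)

# Data a thousand times more likely if the hypothesis is false:
print(posterior(0.95, 0.001, 1.0))   # ~0.019 -- partial conviction gets revised
print(posterior(1.00, 0.001, 1.0))   # 1.0    -- total conviction cannot move
```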

Chesterton goes on to apply this to his belief in Catholicism:

There is, therefore, about all complete conviction a kind of huge helplessness. The belief is so big that it takes a long time to get it into action. And this hesitation chiefly arises, oddly enough, from an indifference about where one should begin. All roads lead to Rome; which is one reason why many people never get there. In the case of this defence of the Christian conviction I confess that I would as soon begin the argument with one thing as another; I would begin it with a turnip or a taximeter cab.

This is not a good thing, for the reasons above. We could also describe the situation in another way. We saw in the previous post that testimony which is consistent where we should expect some inconsistency weakens the evidence rather than strengthening it. If a dozen eyewitnesses agree in every respect, this is not good evidence for their claim, but good evidence that they are collaborating. In a similar way, if it seems to you that “everything proves it,” this is very good evidence that you are incapable of distinguishing between things that support your position and things that do not.
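The eyewitness point can be put in rough numbers. A toy model with made-up figures: suppose each of twelve honest, independent witnesses reports any given detail accurately with probability 0.9, while colluding witnesses agree by construction.

```python
n_witnesses = 12
n_details = 10
p_accurate = 0.9     # chance an honest witness gets one detail right (made-up)

# All independent witnesses agree on a detail (counting only the case where
# every one of them reports the true value):
p_agree_one = p_accurate ** n_witnesses        # ~0.28
p_agree_all_honest = p_agree_one ** n_details  # ~3e-6

p_agree_all_collusion = 1.0                    # colluders agree by design

print(p_agree_all_collusion / p_agree_all_honest)   # likelihood ratio ~3e5
```

Perfect agreement on every detail is thus much better explained by collaboration than by twelve independent accurate reports, which is exactly why it weakens rather than strengthens the testimony.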

This also provides a fuller explanation of why the person who is entirely convinced in Chesterton’s sense finds it difficult to argue for his position. Chesterton’s point, that it is hard to choose where to begin when there are so many possibilities, has some validity. But more fundamentally, the person who is entirely convinced in this way is not engaging in reasonable argument in the first place, while the person who is only partially convinced at least has the possibility of engaging in that kind of argument.