In an earlier post I discussed Robert Aumann’s mathematical theorem demonstrating that people with common priors cannot agree to disagree: if their opinions about the probability of some event are common knowledge, those opinions must be identical.
As I said at the time, real human beings do not have a prior probability distribution, and thus the theorem cannot apply to them strictly speaking. To the degree that people do have a prior, that prior can differ from person to person.
A person’s prior can also be modified, something which is not meant to happen to a prior understood in the mathematical sense of Aumann’s paper. We can see this by means of a thought experiment, even if the thought experiment itself cannot happen in real life. Suppose you are given a machine that works like this: you can ask the machine whether some statement is true. It has a 100% chance of printing out a 1 if the statement is in fact true. If the statement is false, it has a 10% chance of printing a 1, and a 90% chance of printing a 0. You are allowed to repeat the question, with the responses having the same probability each time.
Thus if you ask about a false statement, it will have a 10% chance of printing a 1. It will have a 1% chance of printing 1 twice in a row, and a 0.1% chance of printing a 1 three times in a row.
Suppose you ask the question, “Are the Chronicles of Narnia a completely accurate historical account of something that really happened to various children from England?”
The machine outputs a 1. So you ask again. You get another 1. Let’s say this happens 10 times. The probability that this happens this many times with a false statement is one in ten billion.
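The arithmetic behind these odds can be checked directly. Here is a minimal sketch (the function name is mine; the 10% false-positive rate and the ten trials come from the thought experiment above):

```python
# Probability that the machine prints 1 on all n trials when the
# statement is actually false: each trial independently prints 1
# with probability 0.1, so the chance of n ones in a row is 0.1 ** n.
def prob_all_ones_if_false(n, false_positive_rate=0.1):
    return false_positive_rate ** n

print(prob_all_ones_if_false(1))   # 0.1: one in ten
print(prob_all_ones_if_false(3))   # one in a thousand
print(prob_all_ones_if_false(10))  # about 1e-10: one in ten billion
```

A true statement, by stipulation, produces a 1 every time, so only the false-statement branch needs any calculation.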
In real life you would conclude that a machine that did this does not work as stated. But in our thought experiment, you know with absolute certainty that it does work as stated. So you almost certainly will conclude that the Chronicles of Narnia are an accurate historical account. The same will be true of pretty much any statement you test, given this result.
But it would be easy to compose far more than ten billion mutually inconsistent statements. Since at most one of a set of mutually inconsistent statements can be true, it is logically inconsistent to assign each of more than ten billion such statements a probability greater than one in ten billion: the probabilities would sum to more than 100%. So if you had a complete and consistent prior distribution that you were prepared to stick to, there would have to be some such statements which you would still believe to be false even after getting a 1 ten times from the machine. This shows that we do not have such a prior: when the machine comes out this way, we conclude that the prior for the particular statement we are testing should have been high enough to accept the statement after the machine’s result. So for example we might think that the actual probability of the Chronicles of Narnia being an accurate historical account is less than one in ten billion. But if we are given the machine and get this result, we will revise our estimate of the original probability of the claim, in order to justify accepting it as true in those circumstances.
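The interaction between the prior and the machine’s evidence can be worked out explicitly with Bayes’ theorem. This is a sketch, not part of the original argument; the function name and the example priors are chosen for illustration:

```python
# Posterior probability that a statement is true after the machine
# prints 1 on n consecutive trials. A true statement always prints 1;
# a false one prints 1 with probability 0.1 on each trial.
def posterior(prior, n, false_positive_rate=0.1):
    p_data_if_true = 1.0
    p_data_if_false = false_positive_rate ** n
    return (prior * p_data_if_true /
            (prior * p_data_if_true + (1 - prior) * p_data_if_false))

# A prior of one in a million is overwhelmed by ten 1s:
print(posterior(1e-6, 10))   # about 0.9999
# A prior of one in a trillion is not:
print(posterior(1e-12, 10))  # about 0.0099
```

The crossover sits near one in ten billion: with ten 1s observed, only statements whose prior exceeds roughly 1e-10 end up more likely true than false, which is exactly why no consistent prior can let every tested statement pass.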
If someone disagrees with the above thought experiment, he can change the 10 to 20, or to whatever is necessary.
Although Aumann’s result depends on unchanging priors, in practice the fact that we can change our priors in this way makes his result apply better to human disagreements than it would if our priors were unchanging but still differed from one person to another.
Robin Hanson has published an extension of Aumann’s result, taking into account the fact that people have different priors and can reason about the origin of these priors. By stipulating certain conditions of rationality (just as Aumann does), he can get the result that a disagreement between two people will only be reasonable if they disagree about the origin of their priors, and in a particular way:
This paper presents a theoretical framework in which agents can hold probabilistic beliefs about the origins of their priors, and uses this framework to consider how such beliefs might constrain the rationality of priors. The basic approach is to embed a set of standard models within a larger encompassing standard model. Each embedded model differs only in which agents have which priors, while the larger encompassing model includes beliefs about which possible prior combinations might be realized.
Just as beliefs in a standard model depend on ordinary priors, beliefs in the larger model depend on pre-priors. We do not require that these pre-priors be common; pre-priors can vary. But to keep priors and pre-priors as consistent as possible with each other, we impose a pre-rationality condition. This condition in essence requires that each agent’s ordinary prior be obtained by updating his pre-prior on the fact that nature assigned the agents certain particular priors.
This pre-rationality condition has strong implications regarding the rationality of uncommon priors. Consider, for example, two astronomers who disagree about whether the universe is open (and infinite) or closed (and finite). Assume that they are both aware of the same relevant cosmological data, and that they try to be Bayesians, and therefore want to attribute their difference of opinion to differing priors about the size of the universe.
This paper shows that neither astronomer can believe that, regardless of the size of the universe, nature was equally likely to have switched their priors. Each astronomer must instead believe that his prior would only have favored a smaller universe in situations where a smaller universe was actually more likely. Furthermore, he must believe that the other astronomer’s prior would not track the actual size of the universe in this way; other priors can only track universe size indirectly, by tracking his prior. Thus each person must believe that prior origination processes make his prior more correlated with reality than others’ priors.
Despite the fact that Hanson’s result, like Aumann’s, is based on a particular mathematical analysis which remains much more rigid than real life, and in this sense cannot apply strictly to real life, it is not difficult to see that it does have strong analogies in real human disagreements. Thus for example, suppose a Christian believes that Christianity has a 98% chance of being true, and Islam a 1% chance. A Muslim, with whom he disagrees, believes that Islam has a 98% chance of being true, and Christianity a 1% chance. If they each believe, “Both of us believe in our religions because that is the one in which we were raised,” it is obvious that this disagreement is not reasonable. In order for each of them to be reasonable, they need to disagree about why they believe what they believe. Thus for example one might think, “He believes in his religion because he was raised in it, while I believe in mine because of careful and intelligent analysis of the facts.” The other obviously will disagree with this.
This particular example, of course, does not take into account the fact that belonging to a religion is not simply a matter of assenting to a particular claim, nor the fact that belief is voluntary; both of these affect such a question in real life.
Nonetheless, this kind of disagreement about the origins of our beliefs is clearly a common phenomenon in situations where we have a persistent disagreement with someone. In the end each person tends to attribute a particular source to the other person’s opinion, and a different source to his own, one which is much more likely to make his own opinion correct. But all of the same considerations apply to these differing opinions about the origins of their beliefs. This suggests that in fact persistent disagreements are usually unreasonable. And this corresponds to how people treat them: once a disagreement is clearly persistent, and clearly will not be resolved by any amount of discussion, people think that the other person is being stubborn and unreasonable.
And in fact, it is very likely that one or both of the two is being stubborn and unreasonable. This will feel pretty much the same from each side, however; thus the fact that it feels to you like the other person is being stubborn and unreasonable is not a good reason for thinking that this is actually the case. He is very likely to feel the same way about you. This will happen no matter who is actually responsible. Most often both parties contribute to it, since no one is actually perfectly reasonable.
The fact that belief is voluntary can be a mitigating factor here, if people recognize the moral influences on their beliefs. Thus for example the Christian and the Muslim in the above example could simply say, “It is not necessarily that I am more likely to be right, but I choose to believe this rather than that, for these personal reasons.” And in that case in principle they might agree on the probability of the truth of Christian and Islamic doctrines, and nonetheless reasonably hold different beliefs, on account of moral considerations that apply to them in particular.
The fact that people do not like to admit that they are wrong is a reason for a particular approach to disagreement. In the last post, we discussed the fact that since words and thoughts are vague, the particular content of a person’s assertions is not entirely determinate. They may be true in some ways, and not true in others, and the person himself may not be considering in which way he is making the claim. So it is much more productive to interpret the person’s words in the way that contains as much truth as possible. We have talked about this elsewhere. Such an understanding is probably a better understanding of the person in the first place. And it allows him to agree with you while excluding the false interpretations, and without saying, “I was wrong.” And yet he learns from this, because his original statement was in fact open to the false interpretations. There is nothing deceptive about this; our words and beliefs are in fact vague in this way and allow for this sort of learning. And cooperating in this way in a discussion will be mutually profitable. Since absolute precision is not possible, in general there is no one who has nothing at all to learn from another.