Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of having accurate beliefs about your future actions and your own goals for that future: more accurate beliefs will not always lead to better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus, since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking, which I discussed previously: he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful: impossible because everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without recognizing that they are inevitable, and thus without realizing to what extent you are making them.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
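
To make the conjunction point concrete, here is a minimal sketch, with probabilities invented purely for illustration: whatever probability is assigned to the particular explanation, the bare claim that one’s life has positive meaning must come out at least as probable, since it is true in every case where the explanation is true and possibly in others as well.

```python
# Minimal illustration of the conjunction point; all numbers are made up.
# M = "my life has a positive meaning"
# E = "a transcendent good is at work redeeming evil" (one possible explanation of M)

p_E = 0.3               # hypothetical probability of the particular explanation
p_M_given_E = 1.0       # assume the explanation, if true, guarantees the meaning
p_M_given_not_E = 0.4   # hypothetical probability of meaning coming some other way

# Probability of the bare claim M, by the law of total probability
p_M = p_E * p_M_given_E + (1 - p_E) * p_M_given_not_E

print(p_E)  # 0.3  -- the explanation (and hence the conjunction of explanation and meaning)
print(p_M)  # 0.58 -- the bare claim, necessarily at least as probable
assert p_M >= p_E * p_M_given_E
```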

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But truth and interestingness are in fact distinct things, and the corresponding goals are distinct, so one who desires both will likely engage in some trade. Indeed, relative to truth-seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though, to the extent that they are more complex and detailed, they are that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion and believe that you never trade away truth for other things (a belief which is itself both false and motivated), you are like someone who never looks at his account: you will not notice how much you are losing.

Do I Really Want To Know?

Some days ago I asked how we can determine whether we really love the truth or not. Bryan Caplan’s account of preferences over beliefs and rational irrationality indicates that there may be an additional impediment to answering this question correctly, besides the factors mentioned in the first post. I may care more or less about the truth about various issues, especially depending on how they relate to other things I care about. Now consider the difference between “I have a deep love for the truth” and “I don’t care much about the truth.”

For most people, the former statement is likely to appear attractive, and the latter unattractive. Let’s suppose we are trying to determine which one is actually true. If the first one is true, then we care about the truth about ourselves, and we will make a decent effort to determine it, presumably arriving at the conclusion that the first is true (since it is true by hypothesis).

But suppose the second is true. In that case, we are unlikely to make a great effort to determine the actual truth. Instead, we are likely to believe the more attractive opinion, namely the first, unless the costs of believing this are too high.

In principle, believing that I have a deep love for truth when in fact I do not could have a very high cost indeed. But in practice this cost would come about only by a very circuitous route, and frequently it would not be immediate or apparent in any way. Consequently someone who does not care much about the truth is likely to believe that he does care a lot, and is likely to change his mind only when the costs of his error become apparent, just like the person who becomes uncertain when he is offered a bet. Under normal circumstances, then, most people will hold the first belief, regardless of whether the first or the second is actually true.

Rational Irrationality

After giving reasons for thinking that people have preferences over beliefs, Bryan Caplan presents his model of rational irrationality, namely his account of the factors that determine whether people give in to such preferences or resist them.

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impressions, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

The cost of error varies with the belief and the believer’s situation. For some people, the belief that the American Civil War came before the American Revolution would be a costly mistake. A history student might fail his exam, a history professor ruin his professional reputation, a Civil War reenactor lose his friends’ respect, a public figure face damaging ridicule.

Normally, however, a firewall stands between this mistake and “real life.” Historical errors are rarely an obstacle to wealth, happiness, descendants, or any standard metric of success. The same goes for philosophy, religion, astronomy, geology, and other “impractical” subjects. The point is not that there is no objectively true answer in these fields. The Revolution really did precede the Civil War. But your optimal course of action if the Revolution came first is identical to your optimal course if the Revolution came second.

To take another example: Think about your average day. What would you do differently if you believed that the earth began in 4004 B.C., as Bishop Ussher infamously maintained? You would still get out of bed, drive to work, eat lunch, go home, have dinner, watch TV, and go to sleep. Ussher’s mistake is cheap.

Virtually the only way that mistakes on these questions injure you is via their social consequences. A lone man on a desert island could maintain practically any historical view with perfect safety. When another person washes up, however, there is a small chance that odd historical views will reduce his respect for his fellow islander, impeding cooperation. Notice, however, that the danger is deviance, not error. If everyone else has sensible historical views, and you do not, your status may fall. But the same holds if everyone else has bizarre historical views and they catch you scoffing.

To use economic jargon, the private cost of an action can be negligible, though its social cost is high. Air pollution is the textbook example. When you drive, you make the air you breathe worse. But the effect is barely perceptible. Your willingness to pay to eliminate your own emissions might be a tenth of a cent. That is the private cost of your pollution. But suppose that you had the same impact on the air of 999,999 strangers. Each disvalues your emissions by a tenth of a cent too. The social cost of your activity—the harm to everyone including yourself—is $1,000, a million times the private cost.

Caplan thus makes the general point that our beliefs on many topics cannot hurt us directly, and frequently can hurt us only by means of their social consequences. He adds the further point that the private cost of an action—or in this case a belief—may be very different from its total social cost.
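
Caplan’s pollution example can be restated as a trivial calculation, using the figures given in the passage quoted above:

```python
# The private versus social cost of driving, with Caplan's figures from the quote above.
people_affected = 1_000_000      # the driver himself plus 999,999 strangers
harm_per_person = 0.001          # each person disvalues the emissions by a tenth of a cent

private_cost = harm_per_person                    # the harm the driver bears personally
social_cost = people_affected * harm_per_person   # the harm to everyone, driver included

print(private_cost)  # 0.001  (a tenth of a cent)
print(social_cost)   # 1000.0 (a thousand dollars, a million times the private cost)
```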

Finally, Caplan presents his economic model of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.

As I said in the last post, one reason why people argue against such a view is that it can seem psychologically implausible. Caplan takes note of the same fact:

Arguably the main reason why economists have not long since adopted an approach like mine is that it seems psychologically implausible. Rational irrationality appears to map an odd route to delusion:

Step 1: Figure out the truth to the best of your ability.

Step 2: Weigh the psychological benefits of rejecting the truth against its material costs.

Step 3: If the psychological benefits outweigh the material costs, purge the truth from your mind and embrace error.

The psychological plausibility of this stilted story is underrated.

Of course, this process is not so conscious and explicit in reality, and this is why the above seems so implausible. Caplan presents the more realistic version:

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.

Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.

Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.

Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.

Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.

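Caplan’s recast steps can be condensed into a small sketch. The threshold and the numbers in the examples are invented purely for illustration; the point is only the structure, in which rationality stays on standby and engages in proportion to the material stakes.

```python
# A purely illustrative sketch of Caplan's "tacit steps"; all quantities are invented.
def effective_belief(emotionally_attached: bool,
                     material_cost_of_error: float,
                     comfort_of_favorite_answer: float) -> str:
    """Return which answer the agent ends up treating as a fact."""
    if not emotionally_attached:
        # Step 1: no emotional attachment, so just follow the evidence.
        return "whatever the evidence supports"
    # Steps 2-3: attachment is present; check whether error is materially costly.
    if material_cost_of_error < 1.0:  # hypothetical threshold for "substantial"
        return "whatever feels best"
    # Steps 4-5: error is costly; weigh the lost comfort against the material stakes.
    if comfort_of_favorite_answer > material_cost_of_error:
        return "whatever feels best"
    return "whatever the evidence supports"

# Cheap error (the date of a historical event): comfort wins.
print(effective_belief(True, material_cost_of_error=0.0, comfort_of_favorite_answer=5.0))
# Expensive error (the drunk surgeon's view of his own skill): objectivity engages.
print(effective_belief(True, material_cost_of_error=1e6, comfort_of_favorite_answer=5.0))
```
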
Caplan offers various examples of this process happening in practice. I will include here only the last example:

Want to bet? We encounter the price-sensitivity of irrationality whenever someone unexpectedly offers us a bet based on our professed beliefs. Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.

Bryan Caplan’s account is very closely related to what I have argued elsewhere, namely that people are more influenced by non-truth-related motives in areas remote from the senses. Caplan’s account explains that a large part of the reason for this is simply that being mistaken is less harmful in these areas (at least in a material sense), and consequently that people care less about whether their views in these areas are true, and care more about other factors. This also explains why the person who is offered a bet in the example changes his mind: this is not simply explained by whether or not the truth of the matter can be determined by sensible experience, but by whether a mistaken opinion in this particular case is likely to cause harm or not.
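
The pull of the bet in Caplan’s example can be spelled out with a little arithmetic. Suppose, purely for illustration, that being “sure” corresponds to a probability of 0.95, and that giving ten-to-one odds means risking ten dollars to win one:

```python
# Expected value of giving ten-to-one odds on a claim you profess to be "sure" of.
# The probability and the stakes are invented for illustration.
p_claim_true = 0.95   # how confident you would have to be for "sure" to be nearly literal
win = 1.0             # you win one dollar if the claim turns out true
lose = 10.0           # you pay ten dollars if it turns out false

expected_value = p_claim_true * win - (1 - p_claim_true) * lose
print(expected_value)  # 0.45: a favorable bet for anyone who really is that confident

# The bet becomes unfavorable only when real confidence drops below 10/11 (about 0.91),
# which is why reluctance to accept it suggests the professed certainty was overstated.
break_even_confidence = lose / (win + lose)
print(break_even_confidence)  # roughly 0.909
```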

Nonetheless, even if you do care about truth because error can harm you, this too is a love of sweetness, not of truth.

Bryan Caplan on Preferences Over Beliefs

Responding to the criticism mentioned in the previous post, Caplan begins by noting that it is quite possible to observe preferences:

I observe one person’s preferences every day—mine. Within its sphere I trust my introspection more than I could ever trust the work of another economist. Introspection tells me that I am getting hungry, and would be happy to pay a dollar for an ice cream bar. If anything qualifies as “raw data,” this does. Indeed, it is harder to doubt than “raw data” that economists routinely accept—like self-reported earnings.

One thing my introspection tells me is that some beliefs are more emotionally appealing than their opposites. For example, I like to believe that I am right. It is worse to admit error, or lose money because of error, but error is disturbing all by itself. Having these feelings does not imply that I indulge them—no more than accepting money from a source with an agenda implies that my writings are insincere. But the temptation is there.

After this discussion of his own experience, he considers the experience of others:

Introspection is a fine way to learn about your own preferences. But what about the preferences of others? Perhaps you are so abnormal that it is utterly misleading to extrapolate from yourself to the rest of humanity. The simplest way to check is to listen to what other people say about their preferences.

I was once at a dinner with Gary Becker where he scoffed at this idea. His position, roughly, was, “You can’t believe what people say,” though he still paid attention when the waiter named the house specialties. Yes, there is a sound core to Becker’s position. People fail to reflect carefully. People deceive. But contrary to Becker, these are not reasons to ignore their words. We should put less weight on testimony when people speak in haste, or have an incentive to lie. But listening remains more informative than plugging your ears. After all, human beings can detect lies as well as tell them. Experimental psychology documents that liars sometimes give themselves away with demeanor or inconsistencies in their stories.

Once we take the testimony of mankind seriously, evidence of preferences over beliefs abounds. People can’t shut up about them. Consider the words of philosopher George Berkeley:

“I can easily overlook any present momentary sorrow when I reflect that it is in my power to be happy a thousand years hence. If it were not for this thought I had rather be an oyster than a man.”

Paul Samuelson himself revels in the Keynesian revelation, approvingly quoting Wordsworth to capture the joy of the General Theory: “Bliss was it in that dawn to be alive, but to be young was very heaven!”

Many autobiographies describe the pain of abandoning the ideas that once gave meaning to the author’s life. As Whittaker Chambers puts it:

“So great an effort, quite apart from its physical and practical hazards, cannot occur without a profound upheaval of the spirit. No man lightly reverses the faith of an adult lifetime, held implacably to the point of criminality. He reverses it only with a violence greater than the faith he is repudiating.”

No wonder that—in his own words—Chambers broke with Communism “slowly, reluctantly, in agony.” For Arthur Koestler, deconversion was “emotional harakiri.” He adds, “Those who have been caught by the great illusion of our time, and have lived through its moral and intellectual debauch, either give themselves up to a new addiction of the opposite type, or are condemned to pay with a lifelong hangover.” Richard Wright laments, “I knew in my heart that I should never be able to feel with that simple sharpness about life, should never again express such passionate hope, should never again make so total a commitment of faith.”

The desire for “hope and illusion” plays a role even in mental illness. According to his biographer, Nobel Prize winner and paranoid schizophrenic John Nash often preferred his fantasy world—where he was a “Messianic godlike figure”—to harsh reality:

“For Nash, the recovery of everyday thought processes produced a sense of diminution and loss…. He refers to his remissions not as joyful returns to a healthy state, but as ‘interludes, as it were, of enforced rationality.'”

One criticism here might go as follows. Yes, Caplan has done a fine job of showing that people find some beliefs attractive and others unattractive, and that some beliefs make them happy and some unhappy. But one can argue, as C.S. Lewis does, that this does not imply that this is why they hold those beliefs. It is likely enough that they have some real reasons as well, and this means that their preferences are irrelevant.

One basis for this objection is probably the idea that sitting down and choosing to believe something seems psychologically implausible. But it does not have to happen so explicitly, even though this is more possible than people might think. The fact that such preferences can be felt as “temptations,” as Caplan puts it in describing his own experience, is an indication that it is entirely possible to give in to the temptation or to resist it, and thus that we can choose our beliefs in effect, even if this is not an explicit thought.

We could compare such situations to the situation of someone addicted to smoking or drinking. Let’s suppose they are trying to get over it, but constantly falling back into the behavior. It may be psychologically implausible to assert, “He says he wants to get over it, but he is just faking. He actually prefers to remain addicted.” But this does not change the fact that every time he goes to the store to buy cigarettes, every time he takes one out to light it, every time he steps outside for a smoke, he exercises his power of choice. In the same way, we determine our beliefs by concrete choices, even though in many cases the idea that the person could have simply decided to choose the opposite belief may be implausible. I have discussed this kind of thing earlier, as for example here. When we are engaged in an argument with someone, and they seem to be getting the better of the argument, it is one choice if we say, “You’re probably right,” and another choice if we say, “You’re just wrong, but you’re clearly incapable of understanding the truth of the matter…” In any case it is certainly a choice, even if it does not feel like one, just as the smoker or the alcoholic may not feel like he has a choice about smoking and drinking.

Caplan has a last consideration:

If neither way of verifying the existence of preferences over beliefs appeals to you, a final one remains. Reverse the direction of reasoning. Smoke usually means fire. The more bizarre a mistake is, the harder it is to attribute to lack of information. Suppose your friend thinks he is Napoleon. It is conceivable that he got an improbable coincidence of misleading signals sufficient to convince any of us. But it is awfully suspicious that he embraces the pleasant view that he is a world-historic figure, rather than, say, Napoleon’s dishwasher. Similarly, suppose an adult sees trade as a zero-sum game. Since he experiences the opposite every day, it is hard to blame his mistake on “lack of information.” More plausibly, like blaming your team’s defeat on cheaters, seeing trade as disguised exploitation soothes those who dislike the market’s outcome.

It is unlikely that Bryan Caplan means to say that your friend here is wicked rather than insane. Clearly someone living in the present who believes that he is Napoleon is insane, in the sense that his mind is not working normally. But Caplan’s point is that you cannot simply say, “His mind is not working normally, and therefore he holds an arbitrary belief with no relationship to reality.” Instead, he holds a belief which includes something that many people would like to think, namely, “I am a famous and important person,” but which most ordinary people do not in fact think, because it is obviously false (in most cases). So one way in which this person’s mind works differently is that reality does not have as much power to prevent him from holding attractive beliefs as it does for normal people, much like the case of John Nash as described by Caplan. But the fact that some beliefs are attractive is not a way in which he differs. It is a way in which he is like all of us.

The point about trade is that everyone who buys something at a store believes that he is making himself better off by his purchase, and knows that he makes the store better off as well. So someone who says that trade is zero-sum is contradicting this obvious fact; his claim cannot be due to a lack of evidence regarding the mutual utility of trade.
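
The point can be made concrete with a single purchase; the numbers below are invented for illustration, echoing the earlier orange example.

```python
# Why an ordinary purchase is positive-sum rather than zero-sum; invented numbers.
value_to_buyer = 1.00    # the most the buyer would willingly pay for the orange
price = 0.50             # the price actually paid at the store
cost_to_seller = 0.30    # what the orange cost the store to supply

buyer_gain = value_to_buyer - price     # 0.50: the buyer walks away better off
seller_gain = price - cost_to_seller    # 0.20: so does the store

print(buyer_gain + seller_gain)  # 0.70 total gain; neither side's gain is the other's loss
```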

Love of Truth and Love of Self

Love of self is natural and can extend to almost any aspect of ourselves, including our beliefs. In other words, we tend to love our beliefs because they are ours. This is a kind of “sweetness”. As suggested in the linked post, since we believe that our beliefs are true, it is not easy to distinguish between loving our beliefs for the sake of truth and loving them because they are ours. But these are two different things: the first is the love of truth, and the second is an aspect of love of self.

Just as we love ourselves, we love the wholes of which we are parts: our family, our country, our religious communities, and so on. These loves are better than pure love of self, but they too can represent a kind of sweetness: if we love our beliefs because they are the beliefs of our family, of our friends, of our religious and political communities, or because they are part of our worldview, none of these things is the love of truth, whether or not the beliefs are actually true.

This raises two questions: first, how do we know whether we are acting out of the love of truth, or out of some other love? And second, if there is a way to answer the first question, what can we do about it?

These questions are closely related to a frequent theme of this blog, namely voluntary beliefs, and the motives for these beliefs. Bryan Caplan, in his book The Myth of the Rational Voter, discusses these things under the name of “preferences over beliefs”:

The desire for truth can clash with other motives. Material self-interest is the leading suspect. We distrust salesmen because they make more money if they shade the truth. In markets for ideas, similarly, people often accuse their opponents of being “bought,” their judgment corrupted by a flow of income that would dry up if they changed their minds. Dasgupta and Stiglitz deride the free-market critique of antitrust policy as “well-funded” but “not well-founded.” Some accept funding from interested parties, then bluntly speak their minds anyway. The temptation, however, is to balance being right and being rich.

Social pressure for conformity is another force that conflicts with truth-seeking. Espousing unpopular views often transforms you into an unpopular person. Few want to be pariahs, so they self-censor. If pariahs are less likely to be hired, conformity blends into conflict of interest. However, even bereft of financial consequences, who wants to be hated? The temptation is to balance being right and being liked.

But greed and conformism are not the only forces at war with truth. Human beings also have mixed cognitive motives. One of our goals is to reach correct answers in order to take appropriate action, but that is not the only goal of our thought. On many topics, one position is more comforting, flattering, or exciting, raising the danger that our judgment will be corrupted not by money or social approval, but by our own passions.

Even on a desert isle, some beliefs make us feel better about ourselves. Gustave Le Bon refers to “that portion of hope and illusion without which [men] cannot live.” Religion is the most obvious example. Since it is often considered rude to call attention to the fact, let Gaetano Mosca make the point for me:

“The Christian must be enabled to think with complacency that everybody not of the Christian faith will be damned. The Brahman must be given grounds for rejoicing that he alone is descended from the head of Brahma and has the exalted honor of reading the sacred books. The Buddhist must be taught highly to prize the privilege he has of attaining Nirvana soonest. The Mohammedan must recall with satisfaction that he alone is a true believer, and that all others are infidel dogs in this life and tormented dogs in the next. The radical socialist must be convinced that all who do not think as he does are either selfish, money-spoiled bourgeois or ignorant and servile simpletons. These are all examples of arguments that provide for one’s need of esteeming one’s self and one’s own religion or convictions and at the same time for the need of despising and hating others.”

Worldviews are more a mental security blanket than a serious effort to understand the world: “Illusions endure because illusion is a need for almost all men, a need they feel no less strongly than their material needs.” Modern empirical work suggests that Mosca was on to something: The religious consistently enjoy greater life satisfaction. No wonder human beings shield their beliefs from criticism, and cling to them if counterevidence seeps through their defenses.

Most people find the existence of mixed cognitive motives so obvious that “proof” is superfluous. Jost and his coauthors casually remark in the Psychological Bulletin that “Nearly everyone is aware of the possibility that people are capable of believing what they want to believe, at least within certain limits.” But my fellow economists are unlikely to sign off so easily. If one economist tells another, “Your economics is just a religion,” the allegedly religious economist normally takes the distinction between “emotional ideologue” and “dispassionate scholar” for granted, and paints himself as the latter. But when I assert the generic existence of preferences over beliefs, many economists challenge the whole category. How do I know preferences over beliefs exist? Some eminent economists imply that this is impossible to know because preferences are unobservable.

This is very similar to points that I have made from time to time on this blog. Like Caplan, I consider the fact that beliefs have a voluntary character, at least up to a certain point, to be virtually obvious. Likewise, Caplan points out that in the midst of a discussion an economist may take for granted the idea of the “emotional ideologue,” namely someone whose beliefs are motivated by emotions, but frequently he will not concede the point in generic terms. In a similar way, people in general constantly recognize the influence of motives on beliefs in particular cases, especially in regard to other people, but they frequently fight against the concept in general. C.S. Lewis is one example, although he does concede the point to some extent.

In the next post I will look at Caplan’s response to the economists, and at some point after that bring the discussion back to the question about the love of truth.