Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking, which I discussed previously. He views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful: impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
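The conjunction fallacy mentioned here rests on a simple rule of probability: a conjunction can never be more probable than either of its conjuncts, since P(A and B) = P(A) × P(B|A), and P(B|A) is at most 1. A minimal sketch, using hypothetical numbers chosen purely for illustration:

```python
# Illustrative sketch (not from the original post): the conjunction rule.
# A specific explanation ("my life is meaningful BECAUSE a transcendent
# good redeems evil") can never be more probable than the bare claim
# ("my life has a positive meaning") on its own.

def p_conjunction(p_a, p_b_given_a):
    """P(A and B) = P(A) * P(B|A); always <= P(A), since P(B|A) <= 1."""
    return p_a * p_b_given_a

# Hypothetical numbers, purely for illustration:
p_meaningful = 0.9                    # P(A): "my life has a positive meaning"
p_explanation_given_meaningful = 0.5  # P(B|A): one particular explanation, given A
p_both = p_conjunction(p_meaningful, p_explanation_given_meaningful)

assert p_both <= p_meaningful  # the conjunction can never exceed P(A)
print(p_both)                  # 0.45
```

Whatever particular explanation is chosen, and however probable it is, the bare claim remains at least as probable as the claim together with its explanation.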

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are distinct goods, and one who desires both will likely engage in some trade between them. Indeed, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at his account: you will not notice how much you are losing.


Vaguely Trading Away Truth

Robin Hanson asks his readers about religion:

Consider two facts:

  1. People with religious beliefs, and associated behavior, consistently tend to have better lives. It seems that religious folks tend to be happier, live longer, smoke less, exercise more, earn more, get and stay married more, commit less crime, use less illegal drugs, have more social connections, donate and volunteer more, and have more kids. Yes, the correlation between religion and these good things is in part because good people tend to become more religious, but it is probably also in part because religious people tend to become better. So if you want to become good in these ways, an obvious strategy is to become more religious, which is helped by having more religious beliefs.
  2. Your far beliefs, such as on religion and politics, can’t effect your life much except via how they effect your behavior, and your associates’ opinions of you. When you think about cosmology, ancient Rome, the nature of world government, or starving folks in Africa, it might feel like those things matter to you. But in terms of the kinds of things that evolution could plausibly have built you to actually care about (vs. pretend to care about), those far things just can’t directly matter much to your life. While your beliefs about far things might influence how you act, and what other people think of you, their effects on your quality of life, via such channels of influence, don’t depend much on whether these beliefs are true.

Perhaps, like me, you find religious beliefs about Gods, spirits, etc. to be insufficiently supported by evidence, coherence, or simplicity to be a likely approximation to the truth. Even so, ask yourself: why care so much about truth? Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth?

Yes, there are near practical areas of your life where truth can matter a lot. But most religious people manage to partition their beliefs, so their religious beliefs don’t much pollute their practical beliefs. And this doesn’t even seem to require much effort on their part. Why not expect that you could do similarly?

Yes, it might seem hard to get yourself to believe things that seem implausible to you at the moment, but we humans have lots of well-used ways to get ourselves to believe things we want to believe. Are you willing to start trying those techniques on this topic?

Now, a few unusual people might have an unusually large influence on far topics, and to those people truth about far topics might plausibly matter more to their personal lives, and to things that evolution might plausibly have wanted them to directly care about. For example, if you were king of the world, maybe you’d reasonably care more about what happens to the world as a whole.

But really, what are the chances that you are actually such a person? And if not, why not try to be more religious?

Look, Robin is saying: maybe you think that religions aren’t true. But it isn’t very plausible that you care that much about truth anyway. So why not be religious, regardless of the truth, since there are known benefits to this?

A few days after the above post, Robin points out some evidence that stories tend to distort a person’s beliefs about the world, and then says:

A few days ago I asked why not become religious, if it will give you a better life, even if the evidence for religious beliefs is weak? Commenters eagerly declared their love of truth. Today I’ll ask: if you give up the benefits of religion, because you love far truth, why not also give up stories, to gain even more far truth? Alas, I expect that few who claim to give up religion because they love truth will also give up stories for the same reason. Why?

One obvious explanation: many of you live in subcultures where being religious is low status, but loving stories is high status. Maybe you care a lot less about far truth than you do about status.

We have discussed in an earlier post some of the reasons why stories can distort a person’s opinions about the world.

It is very plausible to me that Robin’s proposed explanation, namely status seeking, does indeed exercise a great deal of influence among his target audience. But this would not tend to be a very conscious process, and would likely be expressed consciously in other ways. A more likely conscious explanation would be this representative comment from one of Robin’s readers:

There is a clear difference in choosing to be religious and choosing to partake in a story. By being religious, you profess belief in some set of ideas on the nature of the world. If you read a fictional story, there is no belief. Religions are supposed to be taken as fact. It is non-fiction, whether it’s true or not. Fictional stories are known to not be true. You don’t sacrifice any of a love for truth as you’ve put it by digesting the contents of a fictional story, because none of the events of the story are taken as fact, whereas religious texts are to be taken as fact. Aristotle once said, “It is the mark of an educated mind to be able to entertain a thought without accepting it.” When reading fictional stories, you know that the events aren’t real, but entertain the circumstances created in the story to be able to increase our understanding of ourselves, others, and the world. This is the point of the stories, and they thereby aid in the search for truth, as we have to ask ourselves questions about how we would relate in similar situations. The author’s own ideas shown in the story may not be what you personally believe in, but the educated mind can entertain the ideas and not believe in them, increasing our knowledge of the truth by opening ourselves up to others’ viewpoints. Religions are made to be believed without any real semblance of proof, there is no entertaining the idea, only acceptance of it. This is where truth falls out the window, as where there is no proof, the truth cannot be ascertained.

The basic argument would be that if a non-religious person simply decides to be religious, he is choosing to believe something he thinks to be false, which is against the love of truth. But if the person reads a story, he is not choosing to believe anything he thinks to be false, so he is not going against the love of truth.

For Robin, the two situations are roughly equivalent, because there are known reasons why reading fiction will distort one’s beliefs about the world, even if we do not know in advance the particular false beliefs we will end up adopting, or the particular false beliefs that we will end up thinking more likely, or the true beliefs that we might lose or consider less likely.

But there is in fact a difference. This is more or less the difference between accepting the real world and accepting the world of Omelas. In both cases evils are accepted, but in one case they are accepted vaguely, and in the other clearly and directly. In a similar way, it would be difficult for a person to say, “I am going to start believing this thing which I currently think to be false, in order to get some benefit from it,” and much easier to say, “I will do this thing which will likely distort my beliefs in some vague way, in order to get some benefit from it.”

When accepting evil for the sake of good, we are more inclined to do it in this vague way in general. But this is even more the case when we trade away truth in particular for the sake of other things. In part this is precisely because of the more apparent absurdity of saying, “I will accept the false as true for the sake of some benefit,” although Socrates would likely respond that it would be equally absurd to say, “I will do the evil as though it were good for the sake of some benefit.”

Another reason why this is more likely, however, is that it is easier for a person to tell himself that he is not giving up any truth at all; thus the author of the comment quoted above asserted that reading fiction does not lead to any false beliefs whatsoever. This is related to what I said in the post here: trading the truth for something else, even vaguely, implies less love of truth than refusing the trade, and consequently the person may not care enough to accurately discern whether or not they are losing any truth.

Little Things

Chapter 39 of Josemaria Escriva’s book The Way concerns the topic of “little things.” The whole chapter, and really the whole book, is worth reading. The text is composed in the form of a set of aphorisms, much like Francis Bacon’s work. I will quote two passages in particular from the chapter in question:

823. Have you seen how that imposing building was built? One brick upon another. Thousands. But, one by one. And bags of cement, one by one. And blocks of stone, each of them insignificant compared with the massive whole. And beams of steel. And men working, the same hours, day after day…

Have you seen how that imposing building was built?… By dint of little things!

826. Everything in which we poor men have a part — even holiness — is a fabric of small trifles which, depending upon one’s intention, can form a magnificent tapestry of heroism or of degradation, of virtues or of sins.

The epic legends always relate extraordinary adventures, but never fail to mix them with homely details about the hero. — May you always attach great importance to the little things. This is the way!

The second passage asserts that anything great in human life is essentially composed of “small trifles.” The first passage explains why this is so. The world is an ordered place, and one of the orders found in it is the order of material causality. Since the whole is greater than the part, it follows that great wholes are ultimately composed of little parts, or in other words, “small trifles.”

We often tend not to notice this in relation to human life, because we think of life as a kind of story, and it is normal for stories to leave out all sorts of detail, in order to concentrate on the overall picture. But all of that detail is always present: every day is made up of 24 hours, and everything we do ultimately is made up of individual immediate actions.

Thus Escriva says that we should “always attach great importance to the little things,” because there is no other way to accomplish anything. For example, someone might be assigned a paper in school, and find himself unable to write it, because he is constantly thinking of the need to “write a paper.” But “writing a paper” is not an action that can be chosen; it is not a thing that can be done immediately. Unless it is first broken down into “little things,” it will never be done at all. This is one of the main causes of procrastination in people’s lives: failing to see that the larger goals they wish to accomplish must be accomplished by means of little things, through individual immediate actions. Thus someone might say, “I don’t know why, but I never feel like writing the paper.” But in fact he does not feel like writing it because he has not yet presented himself with any option that can actually be chosen.


Story of Your Life

In principle, people could live and act for an end without attempting to fit their lives into the structure of a particular narrative, apart from the general narrative of acting for an end. In practice, however, people feel a need to understand their lives in terms of much more concrete narratives, in other words as though a person’s life were a kind of story.

This happens first of all with some kind of overarching narrative regarding human life in general, and perhaps the rest of the universe as well. Thus for example we saw Eric Reitan argue that people have a need to believe that apparently random destructive events in their lives have a deeper meaning.

Second, people conceive of their lives as a particular story, one in which they are the protagonist, and to a certain degree the narrator as well.

This conception is correct to some degree, but incorrect if it is taken to an extreme. You are in a certain way the main character of your “story,” insofar as your knowledge is naturally centered on yourself, just as a story tends to follow the thought and action of its main character. Likewise, you are in a certain way the narrator, insofar as you make choices about the course of your life. Still, your control is incomplete, because you are not the first cause, and because although you can make your own choices, many other circumstances and events are outside your power.

Darwin Catholic, discussing the nature of plot in stories, remarks:

And yet not just any journey will do. The sense in which plot is an artificial product of what an author does is that an author has the duty of focusing the events in the story down to just those which somehow relate to the journey which is the plot. This can be tightly focused or loosely focused. In a spy thriller, the purpose of every scene may be to put all the pieces of the puzzle together. Some pieces which originally seem to be off, unrelated to the others, will as we proceed prove to be part of the same cohesive image being revealed as other pieces of the puzzle are put together.

But in what we often call a “character driven” or “theme driven” novel instead of a tightly plotted novel, the importance of relevancy is still there. Even if the arc of a novel is “the events which happened in this character’s life”, for the novel to actually be gripping the author must subtly impose a filter whereby we are not really seeing all the events. We see only the events which tie in to a thematic note or progression through which we see the character’s life. If, at the end of the novel, the reader looks back and says, “Why did you include that section? It seemed like it was going somewhere but it never resolved,” then the author has failed to plot well.

In our real lives we have many of these dead ends, things which build up and seem important and then just trail off. A good novelist subtly prunes away these, leaving only what forms a coherent structure, and it’s that structure which is the plot. Fail to do that and you have only an amorphous mess of writing, however craftsman-like.

This suggests a second, perhaps somewhat unconscious, way in which we are the narrator of our story. Elsewhere I pointed out that we do not actually remember much of our lives. It is possible that we naturally prune away, by forgetting them, the “dead ends,” namely seemingly random events that lead nowhere, because they do not fit well into the plot of our lives.

Whether or not this kind of pruning occurs, however, we certainly do attempt to fit our lives into particular structures. So for example young men and women sometimes think of their lives as though they were romance novels, ending with marriage and “they lived happily ever after.” Since of course life goes on after marriage, once they are actually married, they quickly realize that the story must be of a somewhat different nature. Nonetheless, it is still possible to suppose that the previous part of their lives had the specific structure or plot of a story leading up to marriage. But suppose someone has a troubled marriage that ultimately leads to a divorce. This person may have significant difficulty understanding their life. They will still desire to understand it as a kind of story, but the previous interpretation no longer works. It feels rather like a Harry Potter story that ends “with Harry Potter being tortured to death and the Dursley family dancing on his grave.” In such a case, people will tend to go back and rewrite the story from the beginning, not in the sense of changing the events, which is impossible (although selective memory may help here, as noted above), but by composing a new plot. Thus for example the new plot may involve the marriage as a learning experience, rather than as the goal of life.


Morality and Stories

Given the fact that stories are one of the most effective ways of convincing people of things, they are also one of the ways most used for teaching morality, both to children and to adults. While Aesop’s Fables are one of the most evident examples here, this seems to be the case more generally.

Stories perform this task in a number of ways. The most basic way is by making moral claims a part of the real or supposed background in common with the real world, in the way discussed in the previous post. I pointed out earlier that we learn morality from the real world by noticing that our actions have effects that are good and bad even apart from morality. Stories can be even more effective than reality in this respect, because while “bad things will tend to happen if you engage in this kind of behavior” may well be true even in the real world, it can be made even truer in stories. Nury Vittachi describes this aspect of stories:

These theories find confirmation from a very different academic discipline—the literature department. The present writer, based at the Creativity Lab at Hong Kong Polytechnic University’s School of Design, has been looking at the manifestation of cosmic justice in fictional narratives—books, movies and games. It is clear that in almost all fictional worlds, God exists, whether the stories are written by people of religious, atheist or indeterminate beliefs.

It’s not that a deity appears directly in tales. It is that the fundamental basis of stories appears to be the link between the moral decisions made by the protagonists and the same characters’ ultimate destiny. The payback is always appropriate to the choices made. An unnamed, unidentified mechanism ensures that this is so, and is a fundamental element of stories—perhaps the fundamental element of narratives.

In children’s stories, this can be very simple: the good guys win, the bad guys lose. In narratives for older readers, the ending is more complex, with some loose ends left dangling, and others ambiguous. Yet the ultimate appropriateness of the ending is rarely in doubt. If a tale ended with Harry Potter being tortured to death and the Dursley family dancing on his grave, the audience would be horrified, of course, but also puzzled: that’s not what happens in stories. Similarly, in a tragedy, we would be surprised if King Lear’s cruelty to Cordelia did not lead to his demise.

Indeed, it appears that stories exist to establish that there exists a mechanism or a person—cosmic destiny, karma, God, fate, Mother Nature—to make sure the right thing happens to the right person. Without this overarching moral mechanism, narratives become records of unrelated arbitrary events, and lose much of their entertainment value. In contrast, the stories which become universally popular appear to be carefully composed records of cosmic justice at work.

A second way that stories teach morality is by directly indicating to people what is approved of and disapproved of by society. This is a way not only of suggesting that bad things will result from bad behavior, but of ensuring that to some extent, it is actually true. For “being approved of” is naturally felt by a person as a good thing, and “being disapproved of” a bad thing.

A third way, perhaps not entirely distinct from the second, is by presenting some characters as worthy of imitation, and other characters as unworthy. Thus for example some people object to The Godfather on the grounds that it presents criminals in an interesting and attractive light, thereby appearing to put them forward as people to be imitated.

Convincing By Stories

When someone writes a story, something is being invented. It is not merely a narration of facts, since otherwise it would not be a story at all, but a history, or some other kind of account regarding the world as it is.

Nonetheless, there is always something in common with the real world, or something implicitly supposed to be in common with the real world. Thus for example The Betrothed presupposes and sometimes mentions actual facts about seventeenth century Italy, even while including an invented narrative about individual persons. Similarly, the film Interstellar presupposes and sometimes mentions various scientific facts about the universe, even while adding various other things which almost certainly cannot exist in the real world, like time travel.

It is not difficult to see that it is essential to stories to have such a background in common with the real world, for if there were absolutely nothing in common with the real world, the story would be unintelligible. Among other things, a story must follow the laws of logic, at least most of the time, or it will be impossible to understand it as presenting an intelligible narrative. Consequently, a story will make sense to us insofar as the background, real or supposedly real, makes the invented narrative a plausible and interesting one. Thus Manzoni’s novel must present a narrative that seems like a possible one in the context of seventeenth century Italy. Likewise, if the background implies that the invented narrative is highly implausible, the story will not make much sense to us. Thus, for example, while I enjoyed most of Interstellar, my experience was somewhat spoiled by the addition of time travel, and this generally tends to be the case for me when stories involve this particular idea. This is largely because time travel is probably logically impossible. To the degree that other people do not think that it is, or do not feel as if it were, it is less likely to disrupt their enjoyment of time travel stories.

The result of all this is that stories are one of the most effective ways to convince people of things. When we are giving our attention to a story, we are not in the mood for logical analysis or careful thought about the precise nature of the real world. And yet, in order to understand the story, we need to implicitly distinguish between the “real background” and the “invented narrative.” But in fact we may not be able to draw the line precisely; if someone does not know the details of the history of seventeenth century Italy, he will not actually know the difference between the things that Manzoni takes from the real world, and the things that Manzoni invents. The result is that a person can read the book, and walk away believing historical claims about Italy in the real world. These claims may be true, but they might also be false. And this can happen without the person having any explicit idea of learning history from a novel, and without noticing that he has become convinced of something which he previously did not believe.