Alien Implant: Newcomb’s Smoking Lesion

In an alternate universe, on an alternate earth, all smokers, and only smokers, get brain cancer. Everyone enjoys smoking, but many resist the temptation to smoke in order to avoid getting cancer. For a long time, however, there was no known explanation for the link between smoking and cancer.

Twenty years ago, autopsies revealed tiny black boxes implanted in the brains of dead persons, connected to their brains by means of intricate wiring. The source and function of the boxes and of the wiring, however, remain unknown. There is a dial on the outside of each box, pointing to one of two positions.

Scientists now know that these black boxes are universal: every human being has one. And in those humans who smoke and get cancer, in every case, the dial turns out to be pointing to the first position. Likewise, in those humans who neither smoke nor get cancer, in every case, the dial turns out to be pointing to the second position.

It turns out that when the dial points to the first position, the black box releases dangerous chemicals into the brain which cause brain cancer.

Scientists first formed the reasonable hypothesis that smoking causes the dial to be set to the first position. Ten years ago, however, this hypothesis was definitively disproved. It is now known with certainty that the box is present, and the dial set to its position, well before a person ever makes a decision about smoking. Attempts to read the state of the dial during a person’s lifetime, however, result most unfortunately in an explosion of the equipment involved, and the gruesome death of the person.

Some believe that the black box must be reading information from the brain, and predicting a person’s choice. “This is Newcomb’s Problem,” they say. These persons choose not to smoke, and they do not get cancer. Their dials turn out to be set to the second position.

Others believe that such a prediction ability is unlikely. The black box is writing information into the brain, they believe, and causing a person’s choice. “This is literally the Smoking Lesion,” they say. Accepting Andy Egan’s conclusion that one should smoke in such cases, these persons choose to smoke, and they die of cancer. Their dials turn out to be set to the first position.

Still others, more perceptive, note that the argument about prediction or causality is utterly irrelevant for all practical purposes. “The ritual of cognition is irrelevant,” they say. “What matters is winning.” Like the first group, these choose not to smoke, and they do not get cancer. Their dials, naturally, turn out to be set to the second position.

 


18 thoughts on “Alien Implant: Newcomb’s Smoking Lesion”

  1. How did scientists discover that the black box is pre-set if they can’t look at it while a person is alive? Can you find people with the dial in the first position who died before they got the chance to smoke?


    • In general, we could flesh out the details of the story in many ways without changing the implication, namely that Newcomb’s problem and the Smoking Lesion are simply the same problem for all practical purposes.

      Perhaps the scientists discovered that the dial is pre-set by means of some nanobot motion detector which would detect any movement of the dial, but not its current location. Trying to detect the current location would presumably cause the explosion.

      Of course, stepping outside the story, we all know why you are not allowed to look at it while the person is alive: because you could choose this algorithm: “Check if the box is set to the first position. If so, do not smoke. If it is set to the second position, smoke,” and then the correlation would necessarily be broken.

      But there is no difference in this regard between the Lesion and Newcomb: take the case of Omega who flies away after the boxes are filled or not. You can break the correlation by implementing this algorithm: “Check the box for the million. If the million is there, take both boxes. If it is not, take only the empty one.” The natural response would be that in such a situation the box will be empty, and the person’s lust for money will lead him to take the thousand.

      The corresponding answer for the lesion, if we allow checking, would be that if you determine that you are set to cancer, you realize you are going to get cancer anyway, so you might as well smoke. And of course only people who have the dial set to the first position ever check.

      In other words, there are various ways to deal with the possibility of checking, but this makes no relevant difference between Newcomb and the Lesion.

      “Can you find people with the dial in the first position who died before they got a chance to smoke?” This would depend on how we flesh out the details of the story; apparently people choose only once whether to be smokers or non-smokers, which isn’t terribly similar to our world. But my inclination is to say, “Sure, some people died the day before they made their choice, and some of them had it set to the first position, and some to the second.”


  2. I don’t think this argument works. If you initially intend to think this way, but it turns out that the box is writing to your brain to cause your choice, the box overwrites your intention.

    Of course, where the box is infallible as stated, it doesn’t matter what your initial intention was in the writing version. So you might as well assume that it is reading your intention, since in that case it does matter what your intention is.

    But if the box merely creates a tendency rather than a certain choice, and that is what it is in fact doing (rather than reading your brain), then it is rational to smoke, as in the smoking lesion problem.


    • First, the scientists came up with the ideas of reading and writing, but these are not actually exhaustive. There could also be intermediate possibilities involving both reading and writing. The box could be some sort of brain-computer interface.

      Second, there is no need for any overwriting. All adults have the boxes, so whatever your initial thoughts about the matter were, that is what was written. And even if something did get overwritten, that would just feel like changing your mind, which is a thing that happens often to human beings. So it would feel quite normal.

      Third, we need to start with what we agree on, and then use that to consider the things we disagree about. You say that you “might as well assume that it is reading,” so we agree that in the story as written, it is rational to choose not to smoke, even though we disagree on the reason.

      You start to discuss the disagreement when you say, “But if the box merely creates a tendency…” Let us think about this. We will modify the story:

      “The correlation was very strong, but not perfect. Out of each thousand persons who chose to smoke, on average 999 had their dial set to the first position, and died of cancer. On average one person had his dial set to the second position, and he did not get cancer. Likewise, out of each thousand persons who chose not to smoke, on average 999 had their dial set to the second position, and they did not get cancer. And on average one had his dial set to the first position, and that unfortunate fellow got cancer even though he did not smoke.”

      “The scientists still did not know for sure whether the box was reading or writing.”

      I assume you agree with me that it is still rational not to smoke in this situation, especially since it could simply be that you have a predictor which is nearly perfect.
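
      For concreteness, here is the evidential comparison worked out as a minimal sketch. Only the 999/1000 correlation comes from the story; the utility numbers are purely hypothetical, chosen so that the pleasure of smoking is small next to the harm of cancer:

      ```python
      # Hypothetical utilities: smoking is mildly pleasant, cancer is catastrophic.
      U_SMOKE = 1.0
      U_CANCER = -1000.0

      # Conditional probabilities from the modified story.
      p_cancer_given_smoke = 0.999
      p_cancer_given_abstain = 0.001

      ev_smoke = U_SMOKE + p_cancer_given_smoke * U_CANCER
      ev_abstain = p_cancer_given_abstain * U_CANCER

      print(f"EV(smoke)   = {ev_smoke:.1f}")    # -998.0
      print(f"EV(abstain) = {ev_abstain:.1f}")  # -1.0
      ```

      On any such numbers, conditioning on the act makes not smoking come out far ahead, which is all the evidential argument needs.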

      On with the story:

      “Two years later, the scientists conclusively ruled out the possibility of prediction. They now know that the box is writing, but only in a way that creates a very strong tendency, so that 99.9% of people follow the tendency.”

      “The great philosopher Simon said, ‘Since the box merely creates a tendency rather than a certain choice, and this is what it is in fact doing, rather than reading, then it is rational to smoke as in the smoking lesion problem.’ Thousands of people were persuaded by his eloquence. Unsurprisingly, 999 out of 1000 of those convinced had their dials set to the first position, and choosing to smoke, they died of cancer. Likewise, 999 out of 1000 of the evidential decision theorists, who refused to be convinced by Simon’s arguments, chose not to smoke, and had their dials set to the second position. They survived, unlike Simon’s followers.”

      Are you still in favor of following the philosopher’s opinion and joining the dead group, rather than following the evidential decision theorists and surviving?
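
      If it helps, the story’s statistics can be reproduced by simulating the writing-with-tendency model directly. The 99.9% tendency is from the story; the population size and the even split of dial settings are illustrative assumptions of mine:

      ```python
      import random

      random.seed(0)
      N = 1_000_000
      TENDENCY = 0.999  # from the story: 99.9% of people follow the written tendency

      smokers = smokers_dead = abstainers = abstainers_dead = 0
      for _ in range(N):
          dial_first = random.random() < 0.5  # pre-set, independent of anyone's reasoning
          cancer = dial_first                 # first position means cancer, without exception
          follows = random.random() < TENDENCY
          smokes = dial_first if follows else not dial_first
          if smokes:
              smokers += 1
              smokers_dead += cancer
          else:
              abstainers += 1
              abstainers_dead += cancer

      print(f"P(cancer | smoked)    = {smokers_dead / smokers:.4f}")        # about 0.999
      print(f"P(cancer | abstained) = {abstainers_dead / abstainers:.4f}")  # about 0.001
      ```

      However the tendency gets distributed among decision theories, those who end up smoking die at the 999/1000 rate; that regularity is what the question above is pointing at.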


      • What we agree on:

        I agree that it is correct to 1-box in Newcomb’s problem (i.e. not to smoke in this problem if it is reading and not writing).

        What we disagree on:
        You say whether it is reading or writing doesn’t matter, and you seem to be arguing from that premise that it is correct not to smoke in the smoking lesion problem.

        I disagree, thinking it is correct to smoke in the smoking lesion problem.

        Your first modified story:

        ““The scientists still did not know for sure whether the box was reading or writing.”

        I assume you agree with me that it is still rational not to smoke in this situation, especially since it could simply be that you have a predictor which is nearly perfect.”

        Yes, I agree that it is correct not to smoke in this case.

        Your second modified story:

        “Two years later, the scientists conclusively ruled out the possibility of prediction. They now know that the box is writing, but only in a way that creates a very strong tendency, so that 99.9% of people follow the tendency.”

        “The great philosopher Simon said, ‘Since the box merely creates a tendency rather than a certain choice, and this is what it is in fact doing, rather than reading, then it is rational to smoke as in the smoking lesion problem.’ Thousands of people were persuaded by his eloquence. Unsurprisingly, 999 out of 1000 of those convinced had their dials set to the first position, and choosing to smoke, they died of cancer. Likewise, 999 out of 1000 of the evidential decision theorists, who refused to be convinced by Simon’s arguments, chose not to smoke, and had their dials set to the second position. They survived, unlike Simon’s followers.”

        You say:

        “Are you still in favor of following the philosopher’s opinion and joining the dead group, rather than following the evidential decision theorists and surviving?”

        That’s a very presumptuous way of stating it. The scenario we are considering here is this:

        1) The dials are set before considering the arguments and making the decision, and the setting does not change as a result of the decision, and is not originally set by reading anything about your brain.

        2A) If the dial is set to the first position, you get brain cancer with certainty.

        2B) If the dial is set to the second position, you certainly don’t get brain cancer.

        3A) If the dial is set to the first position, the box writes to your brain to give you a very strong tendency to smoke.

        3B) If the dial is set to the second position, the box writes to your brain to give you a very strong tendency not to smoke.

        In this scenario, absolutely nothing about your decision, or what decision theory you use, etc. affects whether you get brain cancer or not – it is predestined by your box, which has nothing to do with your thinking.

        One way the box might give you a strong tendency not to smoke is by writing into your brain a strong tendency to be an evidential decision theorist. So the fact that you are an evidential decision theorist is strong evidence that your dial is set to the second position. But evidential decision theory did nothing to help you in this respect; being an evidential decision theorist is merely an unfortunate side effect of the fortunate position of your dial, and the theory is taking credit for a dial setting that it did nothing to affect. The only actual effect of evidential decision theory here is giving you a small chance of failing to smoke (while still getting brain cancer) if you managed to talk yourself into it despite having the dial set to the first position.

        On the other hand if you decide to smoke, this is evidence your dial is set to the first position, but the decision theory you use is doing nothing to cause the dial setting. It is just giving you some chance to smoke, without getting brain cancer, if you manage to accept a sensible decision theory despite having the dial set to the second position.
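
        To make the distinction concrete, observation and intervention come apart in this model. A minimal sketch, where the 0.999 tendency comes from the modified story and the forced act is my own do()-style illustration:

        ```python
        import random

        random.seed(1)
        N = 1_000_000

        def trial(forced_smoke=None):
            """One person. If forced_smoke is set, the act is imposed from
            outside the model, like a do() intervention on a causal graph."""
            dial_first = random.random() < 0.5  # set before, and independently of, any decision
            cancer = dial_first                 # the box alone causes the cancer
            if forced_smoke is None:
                follows = random.random() < 0.999
                smokes = dial_first if follows else not dial_first
            else:
                smokes = forced_smoke
            return smokes, cancer

        # Observational: conditioning on the act tracks the dial setting.
        results = [trial() for _ in range(N)]
        cancer_among_smokers = [c for s, c in results if s]
        print(f"P(cancer | smoke)     = {sum(cancer_among_smokers) / len(cancer_among_smokers):.3f}")  # ~0.999

        # Interventional: forcing the act leaves the cancer rate at P(dial first) = 0.5.
        forced = [trial(forced_smoke=True)[1] for _ in range(N)]
        print(f"P(cancer | do(smoke)) = {sum(forced) / N:.3f}")  # ~0.5
        ```

        The correlation is real, but forcing the act does not move the cancer probability at all, because the only causal arrow into the cancer comes from the dial.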

        Note this all contrasts with Newcomb’s problem where it is rational to, before the predictor examines your brain, reprogram yourself to precommit to 1-box. In this case, the 1-boxing decision theory is doing real work for you (when the predictor examines you).


        • If I understand your response correctly, it says this:

          “In that situation, I would say: if I smoke, there is a small chance I won’t get cancer. On the other hand, if I don’t smoke, I will probably turn out to be one of the lucky ones that don’t get cancer. But my decision theory will have had nothing to do with this. So I’m going to smoke, even though I will have a 999/1000 chance of getting cancer, since my tendency to go forward with reasoning like this is very likely because my dial is set to the first position.”

          Do you agree that you are saying this?


            • I would say that’s a pretty bad ritual of cognition, and would prefer to be “lucky”, rather than dead. I’m not sure if there’s more to say on this question, except perhaps this:

              We never have 100% probabilities in real life. So even if the correlation were perfect, as in the original story, we would not know for sure that it is perfect. So if it were discovered that we were in the writing case, with the perfect correlation, would you still choose to smoke, even though it would be essentially certain that doing so would kill you?


              • If the correlation is perfect, and the box is writing, not reading, then there is no point pretending that you have any control over whether you smoke or not.

                About the “we never have 100% probabilities”: this also applies to whether the box is just writing or also doing some reading. So, the decision would have to depend on the relative probabilities of escaping the correlation v. the box reading, and the relative severity of the cancer v. the benefit of smoking.


                • “there is no point pretending that you have any control over whether you smoke or not”

                  This is the same as saying, “If determinism is true, there is no point in pretending that you can make choices.”

                  You have no choice about whether to make choices, even if determinism is true; and in the same way you would have to decide to smoke or not smoke, even in the writing case.


  3. There’s plenty of point in pretending under regular determinism, as the mental attitude of taking your decisions seriously will in fact lead to better outcomes. Here it won’t; only the box setting does.

    (I considered clarifying the “no point pretending” statement after posting it, since it includes an implied assertion that you don’t have control, which is very unclear in this case. I then decided it was too hard to figure out what “you” and “control” actually mean, and left it as it was, since the statement itself is literally true even if the implied assertion is unclear.)


    • In the infallible case, choosing not to smoke will lead to a better outcome than choosing to smoke. The box setting will correspond.

      You say it is pointless to choose to smoke or not in that situation. So what would you do instead of choosing? Lie down and die?


  4. Well no.

    see:

    http://lesswrong.com/lw/r0/thou_art_physics/

    The difference is, in this case the causal diagram really does look like the first one in that linked post, not the later ones, with the box taking the spot of “physics” in that diagram. There is also, of course, an arrow from the box to you, setting your smoking decision, but that is irrelevant – the point is that whether you get cancer is wholly decided by the box setting and no causation of that flows from your decision to smoke, from your prior tendencies which would affect your decision, from the decision theory you use to decide whether to smoke, or from any arguments by anyone considering decision theory.

    There is correlation, but the causation flows entirely directly from the box and your mental state is not even an intermediate step in that causation.

    Whereas, if you decide to lie down and die, your death follows directly from that decision, which is a completely different situation.


    • You didn’t tell me yet what you would choose to do in the writing situation. Saying, “but I don’t have a choice in that situation,” does not say what you would do. And since it would be written that you would choose to do something, you would choose to do something. What would you choose to do?

      “in this case the causal diagram really does look like the first one in that linked post”… Wrong. It looks like the second, with “physics” being “physics and the box.”

      “There is also, of course, an arrow from the box to you, setting your smoking decision, but that is irrelevant – the point is that whether you get cancer is wholly decided by the box setting and no causation of that flows from your decision to smoke”… Even in Eliezer’s post, we could make another flow chart: Physics ten thousand years ago > Physics and me today… and all the causality would flow from ten thousand years ago to today, and none from today to ten thousand years ago.

      My argument is that causality doesn’t matter; getting the good result does, and you do that by not smoking.


      • In the writing situation, I would prefer to choose to smoke, but I would end up doing so if and only if the dial were set to the first setting. For all other choices, I would choose normally.

        “It looks like the second, with “physics” being “physics and the box.””

        No, it does look like the first diagram, and does not look like the second. Recall that:

        a) the box creates the cancer by directly releasing chemicals depending on the dial setting
        b) the dial setting is not selected by reading the mind

        So, there is no causal arrow pointing from “me” to “cancer” as shown in the second diagram. There is only a causal arrow from the box to “cancer” (and another causal arrow from the box to “me”, not shown on that diagram, but unnecessary to the point).

        “Even in Eliezer’s post, we could make another flow chart : Physics ten thousand years ago > Physics and me today… and all the causality would flow from ten thousand years ago to today, and none from today to ten thousand years ago.”

        Yes, and that’s totally different from this situation! Physics from 10 thousand years ago controls the future via the present (including one’s own state), but the box does not control the cancer via the smoking decision. It controls the cancer purely through its own release of chemicals – the effect on the smoking decision is just a side effect.

        “My argument is that causality doesn’t matter; getting the good result does, and you do that by not smoking.”

        I’m all for getting the best results, but you aren’t getting good results from your decision theory; you are getting them from the dial setting, which isn’t affected by your decision theory.


        • “In the writing situation, I would prefer to choose to smoke.” That seems pretty foolish.

          Again, my point is that being caused to be in a good situation is just as good as causing it. There is no need to stamp your foot and say if you don’t get to cause it, you don’t want it.


  5. You’ve left out a fourth possible kind of response: those who note that the argument is utterly irrelevant, and yet still smoke, get cancer, and die, with dials found to be in the first position.

    Underlying the assumption that this fourth group doesn’t exist is the assumption of free will. Ignoring this fourth group suggests begging the question. And when we try to connect your analogy to our real world, we see that this fourth group exists, at least to the extent that we can trust people’s communications about their own beliefs.

    The reality is that no one who disbelieves in free will believes that it matters in the least. (I avoid the word “determinism” because it invites arguments about randomness, which I think are irrelevant to the free will question.) If we haven’t free will, then we haven’t the free will to disbelieve free will. Where it matters is where we treat questions differentially: where we say that the subject has free will in one domain, but not in another. The argument from the free will skeptic is that no such distinction is appropriate. So let’s change your argument a little bit and see what comes out.

    There’s a switch that will eventually make people believe themselves to be Napoleon. For a long time, this switch is assumed to be set at birth, but it really doesn’t matter for the free will argument when it is set, or what switches it. (Note that all three of your groups exist in regards to this switch, and that my fourth group also exists.)

    One day the CIA discover a magic spell and find that every target at which they cast it eventually believes him or herself to be Napoleon, and post-mortem examination shows the Napoleon switch set appropriately. Statistical analysis shows this to happen much more often than mere chance would suggest, so we assume causality. This is used by the CIA to Napoleonize and thus nullify political dissidents. But many CIA operatives find this ethically distasteful and refuse to participate in the program. A second switch is discovered: one which correlates perfectly with which CIA operatives participate. As more and more of these operatives die and are examined, this correlation is established beyond any reasonable doubt. Indeed, the CIA eventually discover a second spell which appears to cause CIA operatives to tolerate Napoleonization. However, not all CIA operatives are willing to use this second spell.

    Now: does it make any sense to have one opinion about the first switch, yet a different opinion about the second switch? Does it make any sense to hold that behaviors of people suffering from schizophrenia are any more or less free than behaviors of people not?


    • It’s kind of amusing that I had to remove your comment from the spam filter in order to publish it. Apparently even an algorithm was able to detect that it was not relevant to the post.

      The post is about decision theory, not about free will.

