Newcomb’s Paradox

via Decisions, decisions | Annoying Precision.

Newcomb’s paradox is the name usually given to the following problem. You are playing a game against another player, often called Omega, who claims to be omniscient; in particular, Omega claims to be able to predict how you will play in the game. Assume that Omega has convinced you in some way that it is, if not omniscient, at least remarkably accurate: for example, perhaps it has accurately predicted your behavior many times in the past.

Omega places before you two opaque boxes. Box A, it informs you, contains $1,000. Box B, it informs you, contains either $1,000,000 or nothing. You must decide whether to take only Box B or to take both Box A and Box B, with the following caveat: Omega filled Box B with $1,000,000 if and only if it predicted that you would take only Box B.

What do you do?

(If you haven’t heard this problem before, give it some thought first, and maybe read the post linked to above, before going on to my own thoughts on the matter.)

I originally made the two-box argument, based on the idea that whatever Omega had done was done and couldn’t be retroactively adjusted. But I haven’t actually been told that its method doesn’t involve monitoring my thought processes right up to the end and only placing the money once it is sure it knows what I will do (which, according to some recent studies, may well be somewhat before I know it myself). If I were told that whatever money would ever be in the boxes was already there, then I would indeed take both (and be pretty certain of getting only $1,000).

But if there is any way of convincing Omega that I will choose just Box B, then surely I should give it a try. So here goes!

Whether I choose one box or two depends on whether, and to what extent, I value the experience of proving Omega wrong. It costs me only $1,000 to abandon Box A. If I do that, I gain either $1,000,000 or the experience of proving Omega wrong. If I choose both, I get either $1,000 plus $1,000,000 and the experience of proving Omega wrong, or just $1,000 and the uncomfortable knowledge that Omega was right. Clearly I choose just Box B: if Omega has any idea at all of how I think, I’ll be $1,000,000 richer than I am now, and if it does not, then at least I should be able to make more than $1,000 on the talk-show circuit explaining how I “defeated the oracle”.
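For what it’s worth, here is a minimal expected-value sketch of that reasoning (in Python; the predictor accuracy p is an assumption of mine, since the problem gives no number for how reliable Omega actually is, and the value of proving Omega wrong is left out):

    # Toy expected-value comparison for Newcomb's problem.
    # p is an assumed probability that Omega predicts my choice correctly;
    # it is my own parameter, not something given in the problem statement.

    def expected_value(choice, p, small=1_000, big=1_000_000):
        if choice == "one-box":
            # Box B is full exactly when Omega (correctly) predicted one-boxing.
            return p * big
        if choice == "two-box":
            # I keep the $1,000, and Box B is full only if Omega got me wrong.
            return p * small + (1 - p) * (small + big)
        raise ValueError("choice must be 'one-box' or 'two-box'")

    for p in (0.5, 0.9, 0.999):
        print(p, expected_value("one-box", p), expected_value("two-box", p))

With these stakes, one-boxing wins in expectation as soon as p * 1,000,000 exceeds p * 1,000 + (1 − p) * 1,001,000, i.e. once Omega is right just over half the time (p > 1,001,000 / 2,000,000 ≈ 0.5005), and the threshold moves with the relative sizes of the two prizes.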

Of course I must also convince Omega that I won’t change my mind, and the only way I know how to do that is to commit firmly to not changing it and somehow to force myself to be the kind of person who wouldn’t be tempted to risk the big prize for a relatively paltry increment. (It would be interesting to see how my response might depend on the relative sizes of the amounts in the two boxes.)

P.S. In a way, this last bit about “forcing myself to be the kind of person…” is, I think, related to a comment made (after I tried to submit the above) by the blog owner Qiaochu Yuan, where he rephrases Newcomb’s paradox in the form: “Imagine that you and Omega are both programs. Omega has been given your source code as input, and it makes predictions by analyzing that input. As a program, you can only execute your source code, but the question is what kind of source code you should want and why (equivalently, if you were a program that could modify your own source code, what would you want to modify it to and why?).”
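To make that picture concrete, here is a toy sketch of the reformulation (the function names are my own invention, and “analyzing the source code” is reduced to simply running it, which is cruder than what the quoted comment has in mind):

    # Toy version of "you and Omega are both programs".
    # Omega is handed the player's decision procedure and predicts by running it.

    def omega_fill_boxes(player):
        prediction = player()          # Omega simulates the player's source code
        box_a = 1_000
        box_b = 1_000_000 if prediction == "one-box" else 0
        return box_a, box_b

    def my_strategy():
        # The question is what you should want this function to return.
        return "one-box"

    box_a, box_b = omega_fill_boxes(my_strategy)
    choice = my_strategy()             # the actual play executes the same source code
    payout = box_b if choice == "one-box" else box_a + box_b
    print(choice, payout)              # prints: one-box 1000000

In this setting the “commitment” above is just the fact that the same code runs in both places, so there is nothing left to change one’s mind with.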
