What does Newcomb's paradox teach us?

Wolpert, David H.; Benford, Gregory
In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, though, an antagonist uses a prediction algorithm to deduce your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory's expected utility and dominance principles appear to provide conflicting recommendations for what you should choose. A recent extension of game theory provides a powerful tool for resolving paradoxes concerning human choice, which formulates such paradoxes in terms of Bayes nets. Here we apply this tool to Newcomb's scenario. We show that the conflicting recommendations in Newcomb's scenario use different Bayes nets to relate your choice and the algorithm's prediction. These two Bayes nets are incompatible. This resolves the paradox: the reason there appear to be two conflicting recommendations is that the specification of the underlying Bayes net is open to two conflicting interpretations. We then show that the accuracy of the prediction algorithm in Newcomb's paradox, the focus of much previous work, is irrelevant. We similarly show that the utility functions of you and the antagonist are irrelevant. We end by showing that Newcomb's paradox is time-reversal invariant: both the paradox and its resolution are unchanged if the algorithm makes its 'prediction' after you make your choice rather than before.
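To make the conflict concrete, here is an illustrative sketch (not taken from the paper) of the two recommendations as expected-utility calculations over two different Bayes nets relating your choice C and the algorithm's prediction P. The payoff values and the 0.99 accuracy are the standard textbook numbers for Newcomb's problem, assumed here for illustration only; the paper argues the accuracy value is in fact irrelevant.

```python
# Illustrative payoffs (standard Newcomb values, assumed for this sketch):
# keys are (your choice, algorithm's prediction).
payoff = {
    ("one", "one"): 1_000_000,   # take only the closed box; it was filled
    ("one", "two"): 0,           # take only the closed box; it was left empty
    ("two", "one"): 1_001_000,   # take both boxes; closed box was filled
    ("two", "two"): 1_000,       # take both boxes; closed box was empty
}

def eu_correlated(choice, accuracy=0.99):
    """Bayes net in which the prediction is statistically tied to your choice:
    conditioning on your choice shifts the prediction's distribution
    (the expected-utility reading of the scenario)."""
    return sum(
        (accuracy if pred == choice else 1 - accuracy) * payoff[(choice, pred)]
        for pred in ("one", "two")
    )

def eu_independent(choice, p_pred_one=0.5):
    """Bayes net in which your choice and the prediction are independent:
    conditioning on your choice leaves the prediction unchanged
    (the dominance reading of the scenario)."""
    probs = {"one": p_pred_one, "two": 1 - p_pred_one}
    return sum(probs[pred] * payoff[(choice, pred)] for pred in ("one", "two"))

# The two nets give opposite recommendations:
print(eu_correlated("one") > eu_correlated("two"))    # one-boxing wins
print(eu_independent("two") > eu_independent("one"))  # two-boxing dominates
```

Under the independent net, two-boxing beats one-boxing for *any* fixed prediction distribution (it gains an extra 1,000 in every column), which is the dominance argument; under the correlated net, one-boxing has a higher expected utility. The point of the paper's resolution is that these are calculations over two incompatible Bayes nets, not two answers to one well-posed question.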
Comment: Revised version with analysis extended and clarified; 22 pages, 1 table
Computer Science - Computer Science and Game Theory