
Thoughts on Newcomb's paradox

Philosophy Paradoxes Newcombs-Paradox
Dani Rybe

Recently, my curiosity was piqued by Newcomb's paradox, a philosophical paradox devised by William Newcomb in 1960. I was never that great at introductions, so let me just shamelessly copy its description from Wikipedia to get everybody up to speed.

The problem

There is a reliable predictor, another player, and two boxes designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following:

  • Box A is transparent and always contains a visible $1,000.

  • Box B is opaque, and its content has already been set by the predictor:

    • If the predictor has predicted that the player will take both boxes A and B, then box B contains nothing.

    • If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.

The player does not know what the predictor predicted or what box B contains while making the choice. The question is, what choice should they make to maximize the amount of money they get?

The paradox arises from the fact that it’s possible to construct seemingly airtight arguments in favor of both choices:

Of course you should only take box B! I have seen this game show in the past. Every time a player chooses both boxes, they leave with only $1,000, and every time they choose only box B, they leave with a million! The predictor IS reliable after all.
— boxB_enjoyer420

You sweet, sweet summer child… obviously you should take both boxes! The amount of money in box B has already been decided, so by taking both boxes you get whatever is in box B, plus an additional $1,000! It's a strictly better choice.
— ₓxXbothBoxesEnthusiastXxₓ

Both of these positions, at least to me, seem completely reasonable, which is a problem.
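The one-boxer's position can be made concrete with a quick expected-value calculation. Suppose the predictor is correct with probability p (a symmetric accuracy I'm assuming for illustration; the function name below is my own): one-boxing pays $1,000,000·p on average, while two-boxing pays $1,000 plus $1,000,000 only when the predictor got it wrong. A minimal sketch:

```python
def expected_payoff(one_box: bool, p: float) -> float:
    """Average winnings given a predictor with accuracy p (assumed symmetric)."""
    if one_box:
        # Box B holds $1,000,000 only if the predictor correctly foresaw one-boxing.
        return p * 1_000_000
    # Two-boxing always yields box A's $1,000, plus box B's $1,000,000
    # only when the predictor wrongly foresaw one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
```

For any p above 0.5005 one-boxing dominates in expectation, and with a truly reliable predictor the gap is enormous; yet this does nothing to defuse the two-boxer's point that the box contents are already fixed, which is exactly why the paradox bites.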

Of course, there is a third position that one may take:

This game show is so fake, and you two are too stupid to see it. It’s impossible to create a perfectly reliable predictor, so all your squabbles are completely meaningless.
— _TheEnlightenedSceptic_

This solution, though, isn't very satisfying, is it?

Even if it's impossible to perfectly predict a real person's choice due to quantum mechanics, free will, God, or the like, I'm not at all sure that any of those things are strictly necessary for the existence of human-like conscious observers.

So here’s my attempt at resolving this paradox. Hopefully it’s interesting.

Proposed solution

So, imagine for a second a scenario where you agree to trip sit me. I, having taken an ungodly amount of shrooms, start seeing aliens in the sky and decide that the most rational course of action is to take my rifle and start shooting at them in an attempt to save the world. From my perspective, this seems like a rational thing to do, but I shouldn’t really do it of course, and you, as a sober person, agree.

This may seem like a random tangent, but I think that's exactly what happens in the paradox scenario. The player's perception of reality gets compromised in a subtle way, and as a result they perceive an apparent break in causality, and the irrational choice of taking both boxes appears rational.

But how would their perception be compromised without their brain chemistry changing in some way? This is where it gets interesting. I'd argue that the process of predicting their choice is already enough. The device in question would need to collect a lot of information from inside the player's brain in order to predict their choice, information that the brain can normally assume to be isolated. And that changes the relationship between the brain and the outside world enough for the illusion of reality to break. In the presence of this unnatural, uncontrolled information tunnel, the brain can no longer construct a representation of reality convincing enough to let the player keep believing that they are an independent conscious observer with free will who can make rational decisions.

The cool thing about this is that, unless the player knows their choices are being predicted, the simplest theory they can construct is that their choice directly determines the contents of the box, which, of course, normal physics cannot explain. And their theory would be correct, in the sense that it would agree with the experimental data.

Anyway, let me know what you think of this idea. Thanks for reading.