The scenario is the following: You are on a game show and you are faced with three doors. Behind one of the doors is your dream prize (my very own lightsaber) while behind the other two doors are dud prizes (a homeopathic contraceptive). You are given the opportunity to select one of the doors and attempt to win your dream prize. After you select a door but before revealing the location of the prize, the host reveals to you one of the doors that did NOT contain the prize, leaving the door you selected and one other. The host then asks you whether you wish to stick with the door you originally selected or change your decision and choose the other remaining door.

This problem is referred to as the 'Monty Hall' problem, and the question is whether a) you should stick with your original decision, b) change your decision and select the other remaining door, or c) it makes no difference either way.

So the answer is that if you wish to increase your chances of winning your dream prize, you should go with option b). That is, you should abandon the door you originally chose and select the other door. By doing this you increase your chances of winning from 33% to 66%.

There are a number of places where people can go wrong with this puzzle, but in my experience the most common response is c): that it makes no difference whether you switch or stay. This response assumes that the prize is equally likely to be behind each remaining door, or that you have a 50% chance of winning regardless of the door you choose. This is in fact incorrect, for reasons I'll now attempt to explain.

Hopefully you can all see that, to begin with, the probability of guessing the door which contains the prize is 33%, or 1 in 3. This also means there is a 66% chance that the door you chose does not contain the prize. That is, there is a 66% chance that the prize resides behind one of the other two doors.

When the host reveals one of the doors that does not contain the prize, the chance of your door containing the prize does not change; it remains at 33%. Importantly, though, we know that the probability that the prize is behind one of the doors is 100%. So given there are two doors remaining and the probability of your door containing the prize is 33%, the probability of the other door containing the prize must be 66%. There was a 66% chance that one of the other two doors contained the prize. Now, however, we know that one of those doors does not contain the prize, so the other door must account for the entire 66%.

If you're not yet convinced by my solution, or you're having trouble getting your head around it, imagine the exact same scenario, only instead of 3 doors to guess from there are now 100. Your chance of guessing the correct door is now 1%, or 1 in 100. So you guess a door, and then I open 98 of the doors that do not contain the prize, leaving just 2 doors: the one you chose and one other. I now ask you whether you'd like to stick with your original choice or switch doors. What do you do? You switch, obviously, because you know the chance of getting it right the first time was only 1 in 100 (1%). The probability in this case that the other door hides the prize is 99%.
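If arithmetic doesn't convince you, simulation might. Here is a quick Monte Carlo sketch in Python (the helper name `play` is mine) covering both the 3-door and 100-door versions. The key observation it relies on: when the host opens every dud door except one, switching wins exactly when your first pick was wrong.

```python
import random

def play(n_doors=3, switch=True, trials=100_000, seed=0):
    """Simulate the Monty Hall game and return the observed win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(n_doors)
        choice = rng.randrange(n_doors)
        if switch:
            # The host opens every door except the player's and one other;
            # the remaining closed door holds the prize unless the player
            # already picked the prize door.
            wins += (choice != prize)
        else:
            wins += (choice == prize)
    return wins / trials

print(f"3 doors,   stay:   {play(3, switch=False):.3f}")   # ~0.333
print(f"3 doors,   switch: {play(3, switch=True):.3f}")    # ~0.667
print(f"100 doors, switch: {play(100, switch=True):.3f}")  # ~0.990
```

Run it a few times with different seeds if you like; the switch strategy hovers around 2/3 with three doors and 99% with a hundred.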

No one emailed me the correct answer which means I hold onto the used handkerchief. The good news though is that this means the prize pool doubles for the next time I post a problem. So check back in if you're eager to win the handkerchief plus an additional prize of equal or lesser value.

James

Are you studying Psychology @ UQ and want to contribute to the UQ Psyc Blog? Send Will an email to find out how: will.harrison@uqconnect.edu.au
Nice explanation, although your numbers don't add (rounding error; it's 67%). But here's a simpler explanation.

Let's say you pick door number 1 initially. Now let's just work out what happens if you switch. First, let's say the prize is behind door number 1. If you switch, you lose.

Let's say you pick door number 1 and the prize is behind door number 2. Monty (from "let's make a deal"), CAN'T show you door 1 or 2, so has to show you door 3. You switch, you win. That's 1/3.

Same if the prize is behind door #3. Now Monty has to show you door #2, and if you switch, you win again. That's 2/3, or a 67% chance of winning if you switch.

Obviously, the same numbers hold if you play out the other combinations (pick door 2, etc.). Just another way of seeing the answer.
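Bart's case analysis can be spelled out exhaustively. A minimal sketch (assuming, as in the thread, that Monty always opens a dud door you didn't pick):

```python
from itertools import product

stay_wins = switch_wins = 0
for prize, pick in product(range(3), repeat=2):   # 9 equally likely cases
    stay_wins += (pick == prize)
    # Monty opens the one remaining dud, so switching takes you to the
    # prize exactly when your first pick was wrong.
    switch_wins += (pick != prize)

print(f"stay: {stay_wins}/9, switch: {switch_wins}/9")   # stay: 3/9, switch: 6/9
```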

Thanks for the alternative explanation, Bart. James spent a few days trying to work out the best way to explain the solution, and like he said, it's not easy.

I've actually tried using your example to explain the solution to an ex-boyfriend of a friend. I went a step further and drew out all the possible outcomes for any one given initial door choice on a bit of paper, but he just kept saying, "No you idiot, why can't you see that when there are only two doors left it can only be 50/50!?!"

That guy was the only Canadian jerk I've ever met, but this story actually makes me fonder of the Monty Hall problem.

You could just say, "It's the difference between prior probabilities and posterior probabilities." He won't know what you're talking about, but it might shut him up.

If not, then play a game for money with him: use the money to divide rewards among different doors, and have someone play Monty Hall (it would have to be someone he trusted). Then, when he loses 2/3 of his money over time, tell him you'll buy him a beer for his stupidity.

Cheers Bart

I was waiting for someone to pick up on the fact that my probabilities don't add to 100%.

@Will - You definitely should have taken advantage of his stupidity.

Along the same lines: a man has two children, one of which is a girl. What is the probability that the other is a girl?

Hi John,

I've had a think about your question and here is my thinking: the chance of having two girls from the outset is .5 for the first child times .5 for the second, or .25 for two girls. However, the fact that we know one child is a daughter has no bearing on whether the second one is a boy or girl. That is, each .5 probability is independent of the other.

So my answer is that there is a .5 probability that the second child is a girl.

Will,

You just fell into the same line of reasoning that trips up most people in the Monty Hall problem. Try this: the only possibility ruled out by the information given is two boys. That leaves Boy-Girl, Girl-Boy, and Girl-Girl. So, 1 case (G-G) out of 3 (B-G, G-B, G-G) = 1/3 that the other is also a girl.
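John's counting can be checked mechanically. A quick sketch in Python (the variable names are mine) that enumerates the four equally likely sex orderings and keeps those with at least one girl:

```python
from itertools import product

families = list(product("BG", repeat=2))            # BB, BG, GB, GG
with_a_girl = [f for f in families if "G" in f]     # BG, GB, GG
both_girls = [f for f in with_a_girl if f == ("G", "G")]
print(f"{len(both_girls)}/{len(with_a_girl)}")      # 1/3
```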

Interesting... thanks for replying!

I guess it depends on whether or not B-G and G-B really are two separate instances. I guess the original wording of the question ("one of which is a girl") implies that either child could be a girl, in which case I'd agree with you, John. However, is it unreasonable to assume from the question that we know which child is a girl, in which case the probability would be 1/2?

Will,

I don't think that's a reasonable assumption, although it does change things, as you note. It gets even more confusing if you add what appears to be totally irrelevant information. For example: a man has two children, one of whom is a girl born on a Tuesday. What is the probability that the other is a girl? Now the probability is almost 1/2 (13/27). Interesting, eh?

We can make it even closer. Imagine the claim was: a man has two children, one of whom is a girl born on the 17th day of the first 21 days of any month. What is the probability that the other is a girl? In this case, it would be 41/83 ≈ 0.494.

Conversely, if a man has two children, one of whom is a girl born in the first half of the year (assume that means the first 365/2 = 182.5 days), then the probability is 3/7 (assuming, as always, each year-half is equiprobable).

To get a feel for this, you have to imagine the specified population. In the latter case, the announced child is a girl born in the first half of the year, and that can happen two ways: the first child fits the description, OR the second child does. Each child has 4 possible states (sex crossed with year-half), so the two ways define 8 cases, which overlap in exactly one case (both children are girls born in the first half). Subtracting the overlap, we get 7 unique (and assumedly equiprobable) cases, only 3 of which have the other child being a girl.
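All three of these numbers can be verified by brute force. A sketch in Python (the function name is mine), enumerating ordered (sex, trait) pairs and conditioning on at least one child matching the announced description:

```python
from fractions import Fraction
from itertools import product

def p_other_girl(n):
    """Exact probability that both children are girls, given that at
    least one is a girl with a particular extra trait of probability
    1/n (e.g. n=7 for "born on a Tuesday"). Brute-force enumeration."""
    child_states = list(product("BG", range(n)))   # (sex, trait) per child
    special = ("G", 0)                             # the announced girl's state
    families = [f for f in product(child_states, repeat=2) if special in f]
    both_girls = [f for f in families if f[0][0] == "G" and f[1][0] == "G"]
    return Fraction(len(both_girls), len(families))

print(p_other_girl(7))    # 13/27 -- girl born on a Tuesday
print(p_other_girl(21))   # 41/83 -- the 1-in-21 trait above
print(p_other_girl(2))    # 3/7   -- girl born in the first half of the year
```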

Will,

What's even cooler is that if you assume an infinity of (or just a large number of) seemingly irrelevant (i.e., uncorrelated) conditions, the probability approaches 1/2 --- your original conclusion. So, one way to characterize your original answer to the question "a man has two children, one of which is a girl. What is the probability that the other is a girl?" is that you were assuming an infinity of orthogonal conditions.

Said differently, it is the assumption that the probability holds over *all other possible conditions*. Which, I must admit, is in the spirit of the original question. But, you can also see how silly that assumption must generally be.
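That convergence is easy to check numerically. Counting the cases as in the half-year example suggests a closed form (my own consolidation of the thread's numbers, not something stated above): with an extra condition of probability 1/n, there are 4n - 1 qualifying families and 2n - 1 with two girls, so the probability is (2n - 1)/(4n - 1), which tends to 1/2 as n grows.

```python
from fractions import Fraction

def p_both_girls(n):
    """Closed form (2n-1)/(4n-1) for the probability that both children
    are girls, given at least one girl with a 1-in-n extra trait."""
    return Fraction(2 * n - 1, 4 * n - 1)

for n in (1, 2, 7, 21, 1000):
    print(n, p_both_girls(n), round(float(p_both_girls(n)), 4))
# n=1 gives 1/3 (no extra information), n=7 gives 13/27,
# n=21 gives 41/83, and large n creeps up toward 1/2.
```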

Anyway, thanks for the fun, and I really did not mean to hijack your blog. My usual email is: vokey@uleth.ca. And, yes, Dr. Tangen was my student.