
# Compound probability of independent events

You'll become familiar with the concept of independent events: events where the outcome of one in no way affects the outcome of the other. Keep in mind, too, that the sum of the probabilities of all the possible outcomes should equal 1. Created by Sal Khan.

## Want to join the conversation?

• At "But these are independent events. What happens in the first flip in no way affects what happens in the second flip..."
"...where someone thinks if I got a bunch of heads in a row ...then all of a sudden it becomes likely on the next flip to get a tail. This is not the case."

Since H and T are equally likely events, if I get more heads in the beginning, don't I then have a better chance of getting a tail (since as the number of flips approaches infinity, the proportions must approach 50%-50%)? Can you please explain?
• Let me get this straight.
So if I get 5 H (heads) in a row flipping a coin, it's not more likely that my sixth will be a T (tails).
I understand that.
But isn't it also true that the probability of six coin flips resulting in at least one T is very high? I mean, it's unlikely that you'll get 6 H in a row, with no Ts. The probability of the sequence HHHHHH is low.
So I'm thinking that although it is very likely that you will get at least one T in your sequence of 6 flips, it is not necessarily likely that the next flip will have a T.
But say for this purpose that you are playing a ridiculous game where you can win a million dollars for flipping a fair coin 5 times and getting at least one T. If you don't, you lose a million dollars.
You have flipped 4 times and gotten all Hs. You know that the probability of getting at least one T is very high. Because you haven't gotten any Ts yet, doesn't that mean that you have a very good chance of getting a T on your next flip?
Or is it that the chance of getting a T on your last flip is equal to the chance of getting a T any of your other times?
I'm confused.
• Here's a more detailed answer:

The probability of flipping six heads in a row is very low before you start, because there you are treating the whole sequence as a single compound event, not as six separate ones. However, once you have already flipped five heads, those five flips no longer matter. Thus, the probability of flipping a sixth head becomes the same as the probability of flipping a head with a single coin, since that's exactly what your sixth coin is: a single coin, unaffected by the previous five.

So as a summary:
We are comparing two different events: flipping six heads in a row (judged before any flips), and flipping one more head (judged after five heads have already appeared).
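One way to convince yourself: simulate many six-flip sequences and look only at those that start with five heads. Independence predicts the sixth flip is still heads about half the time. A rough sketch (the trial count and seed are arbitrary choices of mine):

```python
import random

random.seed(0)

# Among trials whose first five flips are all heads, count how often the
# sixth flip is also a head. Independence predicts a fraction near 0.5.
sixth_heads = 0
qualifying = 0
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):            # first five flips were all heads
        qualifying += 1
        sixth_heads += flips[5]   # True counts as 1

print(sixth_heads / qualifying)   # close to 0.5, not lower
```

Only about 1 in 32 trials even qualifies, which is why the overall sequence HHHHHH is rare, yet the conditional fraction stays near one half.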
• Okay, here you multiply the probabilities, but when do you add them? What is the distinction between adding and multiplying? Or in other words: when do I use which?
• You can add the probabilities of events if they are mutually exclusive. For instance, if you roll a die once, you can't get both a three and a four; you get just one of them, or neither.
On the other hand, if you roll it twice, the two rolls are independent of each other, so you multiply the probabilities.
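The distinction can be checked by brute-force enumeration. A minimal sketch in Python (the specific dice events and names are my own, just for illustration):

```python
from fractions import Fraction
from itertools import product

# One die: P(3 or 4) adds, because the two outcomes are mutually exclusive.
one_roll = range(1, 7)
p_3_or_4 = Fraction(sum(1 for r in one_roll if r in (3, 4)), 6)
assert p_3_or_4 == Fraction(1, 6) + Fraction(1, 6)    # add: 1/3

# Two rolls: P(first is 3 AND second is 4) multiplies, because the two
# rolls are independent.
two_rolls = list(product(range(1, 7), repeat=2))
p_3_then_4 = Fraction(sum(1 for a, b in two_rolls if a == 3 and b == 4),
                      len(two_rolls))
assert p_3_then_4 == Fraction(1, 6) * Fraction(1, 6)  # multiply: 1/36
```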
• Hey... I have a very weird question.
What if there was a family who decided to have exactly 5 children (no twins or anything) and to give each child a name according to its 'rank': 1st child, 2nd child, etc. All boy names start with a B, and all girl names start with a G. The total number of outcomes is 32; I found that after 2 days of writing them down. But how do you solve this mathematically? I tried 5 to the 2nd power at first, but as that is 25, I then tried 2 to the 5th power, which is 32, but I still couldn't find an explanation.
• It is 2^5. You can think of it as listing the number of choices for each "spot":
1st child - two choices
2nd child - two choices
3rd child - two choices
4th child - two choices
5th child - two choices
Then you multiply all of the different options for each spot and you get 2*2*2*2*2 = 2^5 = 32
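The choices-per-spot argument can also be confirmed by just enumerating every sequence, e.g. with Python's itertools (the B/G labels follow the question's naming scheme):

```python
from itertools import product

# One "spot" per child, two choices (Boy or Girl) per spot.
sequences = list(product("BG", repeat=5))
print(len(sequences))         # 32 = 2**5
print("".join(sequences[0]))  # the first listed sequence: "BBBBB"
```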
• What is the difference between a mutually exclusive and independent event?

Thanks
• Interesting question! Many students confuse these two concepts.

Events A and B are called mutually exclusive if they cannot both occur, that is, P(A and B) = 0. In this situation, P(A or B) = P(A) + P(B).
Events A and B are called independent if the occurrence of one event has no effect on the probability of the other event occurring. In this situation, P(A and B) = P(A)*P(B).

Example: suppose two dice are rolled. Let A represent the event that the first die is a 1, let B represent the event that the first die is a 6, and let C represent the event that the second die is a 6.

A and B are mutually exclusive because the first die cannot be both a 1 and a 6. Note that A and B are not independent, because knowing that the first die is a 1 would eliminate the possibility that the first die is a 6 (that is, knowing that the first die is a 1 changes the probability that the first die is a 6, from 1/6 to 0).

A and C are independent, because knowing that the first die is a 1 has no effect at all on the probability that the second die is a 6. Note that A and C are not mutually exclusive, because it is possible for the first die to be a 1 and the second die to be a 6 (the probability that these both occur is 1/36, which is not 0).
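Both claims can be verified by enumerating all 36 equally likely outcomes of the two dice; a small sketch (the event names A, B, C follow the example above):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))
p = lambda event: Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 1   # first die is a 1
B = lambda o: o[0] == 6   # first die is a 6
C = lambda o: o[1] == 6   # second die is a 6

# A and B are mutually exclusive: P(A and B) = 0, so P(A or B) = P(A) + P(B).
assert p(lambda o: A(o) and B(o)) == 0
assert p(lambda o: A(o) or B(o)) == p(A) + p(B)

# A and C are independent: P(A and C) = P(A) * P(C) = 1/36, which is not 0.
assert p(lambda o: A(o) and C(o)) == p(A) * p(C) == Fraction(1, 36)
```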

Have a blessed, wonderful day!
• Why did Sal multiply the probabilities of getting heads on each of the coins in order to find the probability of getting heads on both coins simultaneously?
• If you flip a coin 17 times, what is the probability that the number of heads flipped will be the same as the number of tails flipped?
• I still feel a little unsure about this. The maths makes sense, but I am not sure I understand it well enough to apply it to a different question. Any explanations?
• Regarding the gambler's fallacy, I have a hard time understanding why it doesn't work.
Let's say you flip an unbiased coin 5 times, and all 5 times it was tails.
The probability tells you, since this is an independent event, the next time you flip a coin, it will still be 50% that you will get heads and 50% that you will get tails. If, however, you consider it as a compound event, there's 1/(2^6), about 1.5% that you will get 6 heads or tails in a row.
I wrote a small program to simulate coin flipping and count the number of repeats in a row:
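The commenter's program itself isn't shown; here is a minimal sketch of a run-length counter along the lines described (the seed and variable names are my own):

```python
import random
from collections import Counter

random.seed(42)

# Flip a fair coin 50,000 times and tally how often each run length
# (a maximal streak of identical results) occurs.
flips = [random.choice("HT") for _ in range(50_000)]
runs = Counter()
length = 1
for prev, cur in zip(flips, flips[1:]):
    if cur == prev:
        length += 1
    else:
        runs[length] += 1
        length = 1
runs[length] += 1  # close out the final streak

for n in sorted(runs):
    print(n, ":", runs[n])  # counts roughly follow 50_000 / 2**(n+1)
```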
And here's an example of how many times a head or a tail will repeat in a row if you flip a coin 50,000 times:
1: 12590 ; 2 : 6310 ; 3 : 3112 ; 4 : 1573 ; 5 : 796 ; 6 : 357 ; 7 : 183 ;
8 : 93 ; 9 : 49 ; 10 : 20 ; 11 : 12 ; 12 : 10 ; 13 : 4 ; 14 : 1 ; 15 : 2
The above counts closely follow the 50,000/(2^(n+1)) pattern, where n is the run length.
Generalizing: regardless of the sample size N, about N/(2^(n+1)) runs of exactly n identical flips will occur.
So, if the gambler's fallacy really is a fallacy, how should this phenomenon of the same result appearing several times in a row be explained?