
## AP®︎/College Statistics

### Unit 8: Lesson 6

Parameters for a binomial distribution

# Expected value of a binomial variable

Deriving and using the expected value (mean) formula for binomial random variables.
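As a quick sanity check of the E(X) = np formula, here is a minimal simulation sketch, using n = 10 and p = 0.3 (the example values that come up in the discussion below):

```python
import random

random.seed(0)  # reproducible run

n, p, trials = 10, 0.3, 200_000

# One binomial sample: count successes in n independent Bernoulli trials.
def binomial_sample(n, p):
    return sum(1 for _ in range(n) if random.random() < p)

# Average many samples; the sample mean should be close to n*p = 3.
mean_x = sum(binomial_sample(n, p) for _ in range(trials)) / trials

assert abs(mean_x - n * p) < 0.05  # sample mean ≈ expected value np
```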

## Want to join the conversation?

• X = Y₁ + Y₂ + Y₃ + ... + Yₙ. How are they equivalent?
Edit: I think I see it now. Sal should have defined Y properly. Y = # of successes in 1 trial. Since X is defined as "# of successes in 10 trials", X = Y₁ + Y₂ + Y₃ + ... + Y₁₀. I think Sal should slow down a bit here and properly define what Y is.
E(Y) would then be the expected # of successes in 1 trial, which is P(0)·0 + P(1)·1. Given that P(1) is 0.3 in the example, E(Y) = 0.7·0 + 0.3·1 = 0.3 = P(1).
• Notice that X is a binomial variable, whereas Y is a Bernoulli variable, the simplest case of a binomial variable.
• Where is the sum of independent variables explained? E(X + Y) does not make sense to me.
• Let
𝑋 = {𝑥₁, 𝑥₂, 𝑥₃}
𝑌 = {𝑦₁, 𝑦₂}

Thereby,
𝐸(𝑋) = (𝑥₁ + 𝑥₂ + 𝑥₃)∕3
𝐸(𝑌) = (𝑦₁ + 𝑦₂)∕2

Also,
𝑋 + 𝑌 = {𝑥₁ + 𝑦₁, 𝑥₂ + 𝑦₁, 𝑥₃ + 𝑦₁, 𝑥₁ + 𝑦₂, 𝑥₂ + 𝑦₂, 𝑥₃ + 𝑦₂}

This gives us,
𝐸(𝑋 + 𝑌) = (𝑥₁ + 𝑦₁ + 𝑥₂ + 𝑦₁ + 𝑥₃ + 𝑦₁ + 𝑥₁ + 𝑦₂ + 𝑥₂ + 𝑦₂ + 𝑥₃ + 𝑦₂)∕6
= (2𝑥₁ + 2𝑥₂ + 2𝑥₃ + 3𝑦₁ + 3𝑦₂)∕6
= (𝑥₁ + 𝑥₂ + 𝑥₃)∕3 + (𝑦₁ + 𝑦₂)∕2
= 𝐸(𝑋) + 𝐸(𝑌)

This example can quite easily be generalized to where 𝑋 has 𝑚 elements and 𝑌 has 𝑛 elements.
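The identity E(X + Y) = E(X) + E(Y) derived above can also be checked numerically. This is a small sketch using made-up outcome values for X and Y (any values would do, since the argument is purely algebraic):

```python
import itertools

# Hypothetical equally likely outcomes for X and Y, as in the example above.
x = [2.0, 5.0, 11.0]   # three equally likely outcomes of X
y = [1.0, 4.0]         # two equally likely outcomes of Y

def mean(values):
    return sum(values) / len(values)

# E(X + Y): average x_i + y_j over every pair, each pair equally likely.
e_sum = mean([xi + yj for xi, yj in itertools.product(x, y)])

assert abs(e_sum - (mean(x) + mean(y))) < 1e-9  # E(X+Y) = E(X) + E(Y)
```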
• Why is X taken as a sum of Y's?
• As the mean/expected value of a Bernoulli distribution is p and the mean/expected value of a binomial variable is np, is a binomial variable a multiple of a Bernoulli distribution?
• What is the expected value of a variable like:
"Flip a fair coin until you get tails. X = the number of heads you flipped."
I realize this wouldn't be a binomial variable, but it seemed pretty similar.

Note: P(H) = P(T) = 0.5
• Let X ~ Bin(n, p). Find E(e^(tX)), where t is a constant.
• Nice question! The plan is to use the definition of expected value, use the formula for the binomial distribution, and set up to use the binomial theorem from algebra in the final step.

We have
E(e^(tX))
= sum over all possible k of P(X = k) e^(tk)
= sum k from 0 to n of (n choose k) p^k (1 − p)^(n−k) e^(tk)
= sum k from 0 to n of (n choose k) (pe^t)^k (1 − p)^(n−k)
= (pe^t + 1 − p)^n, by the binomial theorem from algebra.
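The closed form can be verified numerically against the direct sum over the binomial pmf. A minimal check, with arbitrarily chosen example values n = 10, p = 0.3, t = 0.5:

```python
from math import comb, exp

# Check E(e^(tX)) = (p*e^t + 1 - p)^n for X ~ Bin(n, p).
n, p, t = 10, 0.3, 0.5  # example values chosen for the check

# Direct sum of e^(tk) weighted by the binomial probabilities P(X = k).
direct = sum(comb(n, k) * p**k * (1 - p)**(n - k) * exp(t * k)
             for k in range(n + 1))

closed_form = (p * exp(t) + 1 - p)**n

assert abs(direct - closed_form) < 1e-9
```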
• The thing I get caught up on is the expected value of Y. Could someone give me a link to the logic behind E(Y) = p? Specifically, the part where he talked about probability-weighted outcomes?
• Why is Y = 0 or Y = 1?
If Y is not either 0 or 1, what kind of formula should we use?
• In this case we want 𝑌 to represent whether a trial is a success or not.
So it needs to have two outcomes – one for "Success" and one for "Not Success".

The reason we choose the outcomes to be either 0 or 1 is because it allows us to easily count the number of successes after 𝑛 trials:
𝑌₁ + 𝑌₂ + 𝑌₃ + ... + 𝑌ₙ
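To illustrate why the 0/1 coding makes the sum count successes, here is a small sketch (the trial values are simulated with made-up parameters n = 10, p = 0.3):

```python
import random

random.seed(1)  # reproducible example

n, p = 10, 0.3  # hypothetical number of trials and success probability

# Each Y is 1 on success (probability p) and 0 otherwise,
# so summing the Y's counts the successes directly.
ys = [1 if random.random() < p else 0 for _ in range(n)]
x = sum(ys)

assert all(y in (0, 1) for y in ys)
assert x == ys.count(1)  # the sum really is the number of successes
```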

– – –

The way we define a variable depends on what we want it to represent.

For example, if you and a friend were competing in a game, you might want to keep track of who has won more often after 𝑛 rounds.

Then we might want to define 𝑌 = −1 if you lose a round, 𝑌 = 0 if a round ends in a draw, and 𝑌 = 1 if you win a round.

If the sum 𝑌₁ + 𝑌₂ + 𝑌₃ + ... + 𝑌ₙ is negative, you lost more rounds than you won.
If the sum is 0, then both of you won equally often.
And if the sum is positive, you won more rounds than you lost.
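The −1/0/+1 scoring above can be sketched the same way; the round results here are made up for illustration:

```python
# Hypothetical round results: 1 = win, 0 = draw, -1 = loss,
# following the scoring scheme described above.
rounds = [1, -1, 0, 1, 1, -1, 0, 1]

score = sum(rounds)
wins = rounds.count(1)
losses = rounds.count(-1)

assert score == wins - losses  # the sum compares wins to losses directly
assert score > 0               # positive sum: more wins than losses here
```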