We perform $n$ independent Bernoulli trials, each with the same probability of success $p$, where $0 \le p \le 1$. Let $X$ be the random variable equal to the total number of successes.

The distribution of $X$ is called a Binomial distribution with parameters $n$ and $p$. The textbook writes this as $X \sim \mathrm{Bin}(n, p)$.

If we want to show a distribution is binomial, we can simply explain what the number of trials $n$ is, what the chance of success $p$ is, and that the trials are independent of one another.
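To make the definition concrete, here is a minimal Python sketch (the helper name `binomial_sample` is my own, not from the textbook) that draws one value of $X$ by running $n$ Bernoulli trials and counting the successes:

```python
import random

def binomial_sample(n, p):
    """Draw one value of X ~ Bin(n, p): run n independent Bernoulli(p)
    trials and count how many of them succeed."""
    return sum(1 for _ in range(n) if random.random() < p)

# One draw from Bin(10, 0.3): an integer between 0 and 10.
print(binomial_sample(10, 0.3))
```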

Connection to the Bernoulli distribution

If the number of trials is $n = 1$, then the Binomial distribution is the same as the Bernoulli distribution. That is, if $X \sim \mathrm{Bin}(1, p)$, then $X \sim \mathrm{Bern}(p)$ as well.

PMF

If $X \sim \mathrm{Bin}(n, p)$, then the PMF of $X$ is given by

$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \dots, n$$

Intuitively, we are asking for the probability of getting exactly $k$ successful trials out of the $n$. Since the success rate is $p$, we have a factor of $p^k$. Similarly, the failure rate must be $1 - p$, and the number of failed trials is $n - k$, so we also have a factor of $(1-p)^{n-k}$. Finally, the $\binom{n}{k}$ counts the number of ways to choose which $k$ of the $n$ trials are the successes.
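As a quick sanity check (the helper `binomial_pmf` is my own, built on Python's `math.comb`), we can code the PMF directly and confirm that it sums to 1 and that $n = 1$ recovers the Bernoulli PMF from the previous section:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p): comb(n, k) ways to choose which k trials
    succeed, each pattern occurring with probability p**k * (1 - p)**(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The PMF sums to 1 over k = 0, ..., n.
assert abs(sum(binomial_pmf(k, 10, 0.3) for k in range(11)) - 1) < 1e-12
# With n = 1 it matches Bern(p): P(X = 1) = p and P(X = 0) = 1 - p.
assert abs(binomial_pmf(1, 1, 0.3) - 0.3) < 1e-12
assert abs(binomial_pmf(0, 1, 0.3) - 0.7) < 1e-12
```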

Expectation

Using the PMF, we can derive the Binomial expectation of $X$. Remember that we can also end up with zero successes, so the summation begins at zero. (Although, the $k = 0$ term equals $0 \cdot P(X = 0) = 0$, and so this term does nothing in the summation.)

$$E[X] = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}$$

This sucks to calculate. Instead, remember that each of the $n$ trials represents one Bernoulli trial, which either succeeds (with rate $p$) or fails. We can represent $X$ as a sum of indicator variables, $X = X_1 + X_2 + \cdots + X_n$, where $X_i$ is the indicator r.v. for the event that trial $i$ succeeds, so $E[X_i] = 1 \cdot p + 0 \cdot (1 - p) = p$. Then by LOE (linearity of expectation), we have

$$E[X] = \sum_{i=1}^{n} E[X_i] = \sum_{i=1}^{n} p = np$$
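To see that both routes agree, here is a small Python check (reusing the hypothetical `binomial_pmf` helper sketched earlier): the direct summation over the PMF and the linearity shortcut $np$ give the same number.

```python
from math import comb

n, p = 10, 0.3

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Direct definition: E[X] = sum over k of k * P(X = k).
direct = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
# Linearity-of-expectation shortcut: E[X] = n * p.
print(direct, n * p)  # both print 3.0, up to floating-point error
```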

Variance

As shown above, we can represent $X$ as a sum of indicator variables $X_1, \dots, X_n$. These indicators each correspond to one Bernoulli trial, and we know they're all independent, so the variance of the sum is the sum of the variances. Since each indicator has $\mathrm{Var}(X_i) = E[X_i^2] - E[X_i]^2 = p - p^2 = p(1-p)$, $\mathrm{Var}(X)$ is given by

$$\mathrm{Var}(X) = \sum_{i=1}^{n} \mathrm{Var}(X_i) = \sum_{i=1}^{n} p(1-p) = np(1-p)$$
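As a final sketch (same hypothetical `binomial_pmf` helper as before), we can verify numerically that the direct computation $\mathrm{Var}(X) = E[X^2] - E[X]^2$ agrees with the shortcut $np(1-p)$:

```python
from math import comb

n, p = 10, 0.3

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Direct route: Var(X) = E[X^2] - E[X]^2, computed straight from the PMF.
ex  = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
ex2 = sum(k**2 * binomial_pmf(k, n, p) for k in range(n + 1))
print(ex2 - ex**2, n * p * (1 - p))  # both print 2.1, up to floating-point error
```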