AP Statistics Lectures
by Arnold Kling

Confidence Intervals

Suppose that we want to estimate the average height of 17-year-old males. If we measured the height of every male, then we would know the value of the true population parameter, m.

Instead, we measure the height of a sample of twenty-five 17-year-old males. This gives us an estimate of m, which we call m^. From this estimate, we construct a confidence interval for the value of m. We say that m has a C percent chance of lying within d of m^, where d is some distance from our estimate.
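As a minimal Python sketch of this first step, m^ is just the average of the sample. The twenty-five heights below are invented for illustration:

    # A made-up sample of twenty-five heights, in inches (illustration only).
    heights = [66.5, 70.0, 68.2, 67.1, 69.3, 68.8, 66.9, 71.2, 67.5, 68.0,
               69.9, 66.2, 68.4, 70.5, 67.8, 68.1, 69.0, 66.7, 68.6, 70.1,
               67.3, 68.9, 69.5, 67.0, 68.3]

    m_hat = sum(heights) / len(heights)   # the point estimate m^
    print(f"m^ = {m_hat:.2f} inches")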

Estimating confidence intervals is an exercise in "reverse prediction" or inverse probability. That is, we take the result of an experiment and use it to make a "prediction" or inference about the conditions that produced the result.

We first encountered reverse prediction with Bayes' Theorem. Using Bayes' Theorem, we were able to make an informed "prediction" of whether or not Bernie Williams came to bat with a runner in scoring position, given that Williams got a hit. We used the difference between Williams' batting average with runners in scoring position and his batting average without runners in scoring position to make this "prediction."

Here, suppose that our sample gives us a m^ of 68 inches. We now want to "predict" the likelihood that the true population parameter, m, is between, say, 67 and 69 inches.

When we worked with the normal distribution, we showed how to calculate the probability of someone's height falling within some interval. We showed that if we know the mean and standard deviation of height, then we can use the z table to calculate the probability that an individual's height will fall within a certain range.
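Here is a small Python sketch of that calculation. The normal_cdf helper stands in for the z table, and the mean of 68 inches and standard deviation of 2.5 inches are assumed values for illustration:

    import math

    def normal_cdf(x, mean, sd):
        # P(X <= x) for a normal variable; a stand-in for the z table
        return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

    mean, sd = 68.0, 2.5                  # assumed population values
    p = normal_cdf(71, mean, sd) - normal_cdf(65, mean, sd)
    print(f"P(65 < height < 71) = {p:.4f}")   # about 0.77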

If we knew the mean and standard deviation for height in the true population, then we could use the normal distribution to predict the probability that the average height in a sample of twenty-five men would fall within a certain range. The only difference in the calculation is that the standard deviation we would use in the z table is the standard deviation of individual height divided by the square root of n, our sample size; with n = 25, that means dividing by 5.
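In a sketch, that adjustment is a single division (the 2.5-inch individual standard deviation is the value we assume in the next section):

    import math

    sd_individual = 2.5                   # assumed standard deviation of one height (inches)
    n = 25                                # our sample size
    sd_average = sd_individual / math.sqrt(n)
    print(sd_average)                     # 0.5 inches: standard deviation of the sample average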

To calculate a confidence interval, we almost act as if m^ were the true value and m were the sample value. Again, this "reverse" approach is reminiscent of Bayes' theorem.

The Actual Calculation

After all this theory, the calculation is quite simple. We want to know the probability that m is between 67 and 69 inches, given that m^ is 68 inches. Assume that the standard deviation for an individual is 2.5 inches, which means that the standard deviation for the average of the 25-male sample is 2.5/5, or 0.5 inches.

m can fall into one of three intervals:

  1. less than 67 inches
  2. between 67 and 69 inches
  3. greater than 69 inches

The probability that it falls between 67 and 69 inches is one minus the probability that it falls within one of the other two intervals. Since the standard deviation is 0.5 inches and the mean is 68 inches, we are calculating the probability that a z variable will fall within two standard deviations of zero. Using a calculator or the table, this gives us a probability of
1 - 0.0228 - 0.0228 = 0.9544
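The same three-interval calculation can be sketched in Python, with a normal_cdf helper standing in for the calculator or z table:

    import math

    def normal_cdf(x, mean, sd):
        # P(X <= x) for a normal variable; a stand-in for the z table
        return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

    m_hat, sd_average = 68.0, 0.5         # values from the example

    p_below   = normal_cdf(67, m_hat, sd_average)       # interval 1: about 0.0228
    p_above   = 1 - normal_cdf(69, m_hat, sd_average)   # interval 3: about 0.0228
    p_between = 1 - p_below - p_above                   # interval 2
    print(f"{p_between:.4f}")             # 0.9545; the table's rounded 0.0228 gives 0.9544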

Finally, we are in a position to state a confidence interval for m.

We are 95.44 percent confident that m lies between 67 and 69 inches.

Every statement about a confidence interval has two components. First is the confidence level, which in our example is 95.44 percent. Second is the confidence interval itself, which in our example is plus or minus one inch.

When we narrow the confidence interval, we also lower the confidence level. For example, if we narrowed the confidence interval for height to the interval from 67.75 inches to 68.25 inches, the probability that the true average height falls within that interval would be lower.

The reverse also holds. That is, if we are willing to accept a relatively low confidence level, we will get a relatively narrow interval.
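A short sketch of this tradeoff, reusing the numbers from our example; each narrower interval around m^ carries a lower confidence level:

    import math

    def normal_cdf(x, mean, sd):
        return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

    m_hat, sd_average = 68.0, 0.5

    for half_width in (1.0, 0.5, 0.25):   # the "plus or minus" distance, in inches
        p = (normal_cdf(m_hat + half_width, m_hat, sd_average)
             - normal_cdf(m_hat - half_width, m_hat, sd_average))
        print(f"+/- {half_width:.2f} inches -> {p:.1%} confidence")

Plus or minus one inch gives our 95.4 percent level, while the narrow 67.75-to-68.25 interval has only about a 38 percent confidence level.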

Other things equal, a confidence interval will be narrower for a larger sample size. That is because as the sample size gets larger, the standard deviation of the parameter estimate gets smaller. That gives us a tighter confidence interval, which usually is a good thing.
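As a sketch, holding the assumed 2.5-inch individual standard deviation fixed and enlarging the sample shows the interval tightening:

    import math

    sd_individual = 2.5                   # assumed individual standard deviation (inches)

    for n in (25, 100, 400):
        sd_average = sd_individual / math.sqrt(n)
        # half-width of the roughly 95 percent interval: two standard deviations
        print(f"n = {n}: m^ plus or minus {2 * sd_average:.2f} inches")

Quadrupling the sample size cuts the width of the interval in half, because the standard deviation of the estimate shrinks with the square root of n.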