The best intuitive explanation I have come across for why dividing the sum of squared deviations from the sample mean by N underestimates the population variance is that the sample mean "follows" the sample; i.e. the sample almost always deviates less from its own mean than from the population mean (and it never deviates more, since the sample mean is exactly the value that minimizes the sum of squared deviations). This is the source of the bias frodo72 alluded to.
This intuitive argument only shows that taking the sample average of squared deviations from the sample mean will underestimate the population variance; it does not prove that N/(N − 1) is the right correction factor. I don't know of an intuitive argument for that, but a nice rigorous derivation can be found here.
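A quick simulation (a Python sketch, not from the original thread) makes both points concrete: averaging squared deviations from the sample mean and dividing by N undershoots the true variance by a factor of (N − 1)/N, while dividing by N − 1 removes the bias. The sample size N = 5 and the standard-normal population are arbitrary choices for illustration.

```python
import random

random.seed(42)

N = 5                 # small sample size, where the bias is most visible
TRIALS = 200_000
POP_MEAN, POP_SD = 0.0, 1.0   # standard normal: true variance is 1

biased_sum = 0.0
corrected_sum = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    m = sum(sample) / N                        # sample mean
    ss = sum((x - m) ** 2 for x in sample)     # squared deviations from the sample mean
    biased_sum += ss / N                       # divide by N     -> biased estimator
    corrected_sum += ss / (N - 1)              # divide by N - 1 -> unbiased estimator

print(biased_sum / TRIALS)     # close to (N-1)/N = 0.8, not the true variance 1
print(corrected_sum / TRIALS)  # close to 1.0
```

The biased average lands near 0.8, i.e. (N − 1)/N of the true variance, which is exactly why N/(N − 1) is the correction factor.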
This is not an explanation, but it may help. When you work from a sample instead of the entire population, you are making two estimates: the mean and the variance (the square of the standard deviation). The catch is that when you estimate the variance, you subtract the estimated mean from each item in the sample: you are using one estimate inside another.
Which leads us to the concept of degrees of freedom. The sample has N degrees of freedom, i.e. N independent ways it can vary: each of the N items can take a different value. Thus, when you estimate the mean, you divide by N.
When you estimate the variance, you use the mean evaluated over the sample, as said. Since you implicitly trust that mean to be correct (otherwise you would not use it to evaluate the variance!), you give up a degree of freedom: if you fix the value of the mean, you can move only (N-1) items freely, and the N-th is bound to whatever value yields that mean. A variance evaluated this way therefore only accounts for the variation contributed by (N-1) items, not N.
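The "stolen" degree of freedom can be shown concretely. In this small Python sketch (the values are made up for illustration), once the mean of an N-item sample is fixed, only N − 1 items can be chosen freely; the last one is forced.

```python
# Once the mean of an N-item sample is fixed, only N-1 items are free.
N = 4
fixed_mean = 10.0

free_items = [7.0, 12.0, 9.0]             # N-1 items chosen freely
last = N * fixed_mean - sum(free_items)   # the N-th item is forced by the mean
sample = free_items + [last]

print(last)             # 12.0
print(sum(sample) / N)  # 10.0, exactly the fixed mean
```

No matter which N − 1 values you pick, the remaining item is determined, so deviations from the fixed mean carry only N − 1 independent pieces of information.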
Hope that this intuitively helps :)
Flavio
perl -ple'$_=reverse' <<<ti.xittelop@oivalf
Don't fool yourself.