Confidence interval
In statistical estimation, a confidence interval defines a lower and an upper limit within which the true (population) value is expected to lie with a defined probability. This probability is frequently set to 95%, meaning that an error probability of \(\alpha = 5\%\) is accepted (other values of \(\alpha\) are of course possible).
In order to build such a confidence interval, the distribution of the estimator needs to be known. It is known from sampling statistics that the estimated mean follows a normal distribution if the sample is large (n >> 30, say), and a t distribution with \(\nu\) degrees of freedom when the sample is small (n < 30, say).
For a given error probability \(\alpha\), half the width of the confidence interval for the estimated mean is given by
\[t_{\alpha,\nu}\, S_{\bar{y}}\]
where \(S_{\bar y}\) is the standard error and \(t\) comes from the t distribution and depends on the sample size (df = degrees of freedom = n - 1) and on the error probability. Then,
\[P(\bar y - t\, S_{\bar{y}} < \mu < \bar y + t\, S_{\bar{y}}) = 95\%\]
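As a minimal sketch of this calculation (assuming Python with NumPy and SciPy available; the function name `mean_confidence_interval` and the example data are chosen here purely for illustration), the interval can be computed from a sample as follows:

```python
import numpy as np
from scipy import stats

def mean_confidence_interval(sample, alpha=0.05):
    """Confidence interval for the mean, based on the t distribution."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    y_bar = sample.mean()                      # estimated mean
    s_ybar = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)   # two-sided t quantile, df = n - 1
    half_width = t * s_ybar
    return y_bar - half_width, y_bar + half_width

# Example: 95% confidence interval for a small (illustrative) sample
lower, upper = mean_confidence_interval([12.1, 11.8, 12.5, 12.0, 11.6], alpha=0.05)
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")
```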
As with the standard error of the mean, the width of the confidence interval (CI) can be given in absolute terms (in the units of the estimated mean) or in relative terms (in %, relative to the estimated mean).
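Continuing the sketch above (reusing the illustrative helper `mean_confidence_interval`), the relative precision is simply the half-width divided by the estimated mean:

```python
# Half-width of the CI in absolute units and in % of the estimated mean
lower, upper = mean_confidence_interval([12.1, 11.8, 12.5, 12.0, 11.6])
y_bar = (lower + upper) / 2
half_width = (upper - lower) / 2
print(f"absolute half-width: {half_width:.2f}")
print(f"relative half-width: {100 * half_width / y_bar:.1f} %")
```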
- Note!:
- If an estimate is accompanied by a precision statement, it must be clearly stated whether this refers to the standard error or to half the width of the confidence interval!
For larger sample sizes and \(\alpha = 5\%\), the t-value is \(t_{\alpha=0.05,\ \nu>30} \approx 1.96\), that is, around 2, so that as a rule of thumb the confidence interval is approximately \(\bar y \pm 2 S_{\bar y}\); in other words, half its width is about twice the standard error. For smaller sample sizes, the t-value will be larger.
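To illustrate the rule of thumb, the short sketch below (again assuming SciPy) compares the exact two-sided t quantile with the approximate value 2 for a few sample sizes:

```python
from scipy import stats

alpha = 0.05
for n in (5, 10, 30, 100):
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)   # exact two-sided quantile
    print(f"n = {n:3d}: t = {t:.3f}  (rule of thumb uses 2)")
# For n = 5 the quantile is about 2.78, for n = 100 it is close to 1.98,
# so the approximation t ≈ 2 is adequate only for larger samples.
```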