Bias
The term “bias” is frequently used in a colloquial sense equivalent to systematic error. In statistical sampling, the term mostly refers to “estimator bias”. An estimator is biased if it does not approximate the true parametric value of a population as the sample size is increased more and more. Strictly speaking, it is not the estimate that is biased, but the estimator: any single estimate deviates more or less from the true parametric value, but this may simply be an expression of residual variability, even without bias. If a biased estimator is used, the bias is neither reduced nor eliminated by increasing the sample size.
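This behavior can be made visible in a small simulation. The following sketch (not part of the original article) uses an artificially biased estimator of a population mean, namely the sample mean plus a constant offset of 0.5 standing in for some systematic error; the population mean of 10 and all other numbers are invented for illustration. Averaged over many repeated samples, the bias stays the same however large the sample becomes.

```python
import numpy as np

# Hypothetical setup: normal population with assumed true mean 10.
rng = np.random.default_rng(42)
POP_MEAN = 10.0

for n in (10, 100, 1_000):
    # Biased estimator: sample mean plus a constant offset of 0.5,
    # averaged over many repeated samples of size n.
    estimates = [rng.normal(POP_MEAN, 2.0, n).mean() + 0.5 for _ in range(2_000)]
    print(f"n={n:>5}: average estimate = {np.mean(estimates):.3f}")
    # The average stays near 10.5: the bias of 0.5 does not shrink with n.
```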
An estimator is unbiased if, on average, it produces estimates that equal the population parameter. Formally:
\[E(\hat\theta)=\theta\]
(the expected value of the estimated parameter is equal to the true value of the parameter). If this condition is not satisfied, i.e. \(E(\hat\theta)\ne\theta\), then the estimator is biased. The bias can be calculated as
\[B=E(\hat\theta)-\theta\]
but is usually unknown in a particular sampling study, because \(\theta\) itself is unknown. An empirical interpretation of an unbiased estimator is as follows: if we took all possible samples from a population following a defined sampling design, then each of these samples would produce one estimate. The mean of all these individual estimates (which is the expected value of the estimator) would be equal to the true population parameter.
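For a tiny population this interpretation can be checked exactly. The sketch below (an illustration added here, with invented element values) enumerates all possible samples of size 3 under simple random sampling without replacement and shows that the mean of all sample means coincides with the true population mean.

```python
from itertools import combinations
from statistics import mean

population = [4, 7, 9, 12, 18]   # hypothetical element values
n = 3                            # sample size

# One estimate (the sample mean) per possible sample.
estimates = [mean(s) for s in combinations(population, n)]

print(mean(estimates))    # expected value of the estimator: 10.0
print(mean(population))   # true population mean: 10.0 -- they coincide
```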
In the case of a biased estimator, we usually cannot separate out the bias; what we determine by calculating the error variance is then the mean square error, which combines the measure of statistical precision and the bias:
\[MSE(\hat\theta)=V(\hat\theta)+B^2\]
If \(B=0\), we have an unbiased estimator, and the error variance and the mean square error are obviously identical: \(MSE(\hat\theta)=V(\hat\theta)\).
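The decomposition can be verified numerically. The following sketch (again an added illustration, not from the article) uses an artificially biased estimator, 0.9 times the sample mean, with all distributional assumptions invented, and checks that the simulated mean square error matches variance plus squared bias.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 10.0, 25, 200_000   # assumed true value, sample size, replications

# Deliberately biased estimator: 0.9 * sample mean, one estimate per replication.
est = 0.9 * rng.normal(theta, 3.0, (reps, n)).mean(axis=1)

mse = np.mean((est - theta) ** 2)   # mean square error around the true value
var = est.var()                     # variance of the estimator
bias = est.mean() - theta           # estimated bias B

print(f"MSE = {mse:.4f},  V + B^2 = {var + bias**2:.4f}")  # nearly equal
```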
What is described above is the estimator bias. The term “bias” is sometimes also used in other contexts and should then not be confused with estimator bias: if the selection of the sample is not strictly random, but certain types of sampling elements are systematically preferred or excluded, one sometimes speaks of selection bias. A selection bias may be present, for example, when sample trees are selected for convenience along roads, because trees at the forest edge along roads can be expected to have different characteristics than trees inside the stands. When an observer introduces, for whatever reason, a systematic error into the observations, this is sometimes called observer bias. This may happen, for example, when damage classes or quality classes are to be assessed that involve a certain amount of visual judgment.
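The roadside example of selection bias can also be sketched in a simulation. All numbers below are invented: edge trees are assumed to be thicker on average than interior trees, and a convenience sample drawn only along the road systematically overestimates the stand mean, while a strictly random sample does not.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand: 9,000 interior trees and 1,000 larger edge trees (dbh in cm).
interior = rng.normal(30.0, 5.0, 9_000)
edge = rng.normal(38.0, 5.0, 1_000)
stand = np.concatenate([interior, edge])

random_sample = rng.choice(stand, 50, replace=False)       # strictly random
convenience_sample = rng.choice(edge, 50, replace=False)   # roadside trees only

print(f"true stand mean:    {stand.mean():.1f}")
print(f"random sample:      {random_sample.mean():.1f}")   # close on average
print(f"roadside sample:    {convenience_sample.mean():.1f}")  # systematically high
```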