Bias


The term “bias” is frequently used in a colloquial sense as equivalent to systematic error. In statistical sampling, the term mostly refers to the “estimator bias”. An estimator is biased if it does not approach the true parametric value of the population as the sample size is increased more and more. Strictly speaking, it is not the estimate that is biased, but the estimator: any single estimate deviates more or less from the true parametric value, but this deviation may simply be an expression of residual variability, even in the absence of bias. If a biased estimator is used, the bias is neither reduced nor eliminated by increasing the sample size. “Selection bias” describes a procedure of sample selection in which randomization is not fully applied, so that subjective sample selection may lead to the preferential selection of population elements with particular characteristics.

An estimator is unbiased if, on average, it produces estimations that equal the population parameter. Formally:

\[E(\hat{\Phi})=\Phi\]

(the expected value of the estimator is equal to the true value of the parameter). If this condition is not satisfied, i.e. if \(E(\hat{\Phi})\ne\Phi\), then the estimator is biased. The bias can be calculated as \(B=E(\hat{\Phi})-\Phi\), but it is usually unknown in a particular sampling study.
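
As an illustration (not taken from this article), the bias of an estimator can be approximated numerically by drawing many repeated samples. The following Python sketch, with assumed population values, compares the plug-in variance estimator (dividing by n), which is biased, with the sample variance (dividing by n-1), which is unbiased:

# Minimal sketch: approximating estimator bias by repeated sampling.
# Population values (normal, sigma = 2) are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0                      # population variance (sigma = 2)
n, n_samples = 10, 100_000          # sample size and number of repeated samples

biased, unbiased = [], []
for _ in range(n_samples):
    sample = rng.normal(loc=20.0, scale=2.0, size=n)
    biased.append(sample.var(ddof=0))    # divides by n
    unbiased.append(sample.var(ddof=1))  # divides by n - 1

# Bias = E(estimator) - true parameter, approximated by the Monte Carlo mean
print("bias of  n   estimator:", np.mean(biased) - true_var)    # approx. -0.4
print("bias of n-1 estimator:", np.mean(unbiased) - true_var)   # approx.  0.0

Increasing n reduces the size of the bias of the plug-in estimator in this particular case (it is \(-\sigma^2/n\)), but for a biased estimator in general the systematic error does not vanish with larger samples.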

An empirical interpretation of an unbiased estimator is as follows: if we took all possible samples from a population following a defined sampling design, then each of these samples would produce one estimation. The mean of all these individual estimations (which is the expected value of the estimation) would then be equal to the true population parameter. In the case of a biased estimator, we usually cannot separate out the bias; that means that what we determine by calculating the error variance is the mean square error, which embraces both the measure of statistical precision and the bias, resulting in

\[MSE=var(\hat{\Phi})+B^2\]
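
This decomposition can be checked numerically. The following Python sketch (again with assumed population values, not from the article) computes the mean square error of the biased variance estimator directly and compares it with the sum of its variance and squared bias:

# Minimal sketch: MSE = var(estimator) + bias^2, checked by simulation
# for the biased variance estimator (dividing by n).
import numpy as np

rng = np.random.default_rng(1)
true_var, n, n_samples = 4.0, 10, 200_000

estimates = np.array([rng.normal(0.0, 2.0, n).var(ddof=0) for _ in range(n_samples)])

mse_direct = np.mean((estimates - true_var) ** 2)   # direct definition of MSE
bias = np.mean(estimates) - true_var
mse_decomposed = np.var(estimates) + bias ** 2      # variance + squared bias

print(mse_direct, mse_decomposed)   # both approximately equal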

If \(B=0\), we have an unbiased estimator and, obviously, the sample variance and the mean square error are identical, \(MSE=var(\hat{\Phi})\). What was described in this section is the estimator bias. The term “bias” is sometimes also used in other contexts and should then not be confused with estimator bias: if the sample selection is not strictly at random but certain types of sampling elements are systematically preferred or excluded, one sometimes speaks of selection bias. A selection bias may be present, for example, when sample trees are selected for convenience along roads, because trees at the forest edge along roads can be expected to have different characteristics than trees inside the stands. When an observer introduces – for whatever reason – a systematic error into the observations, this is sometimes called observer bias. This may happen, for example, when damage classes or quality classes that involve a certain amount of visual assessment are to be assessed.
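
The road-side example can be illustrated with a small simulation. The following Python sketch assumes a purely hypothetical tree population in which the trees along roads have larger diameters; it shows that the systematic error of the convenience sample does not vanish with increasing sample size, while the error of a random sample shrinks:

# Minimal sketch: selection bias. Hypothetical population where the 10%
# of trees along roads have larger diameters (all values assumed).
import numpy as np

rng = np.random.default_rng(7)
interior = rng.normal(30.0, 5.0, 9_000)   # dbh (cm) of interior trees
edge = rng.normal(38.0, 5.0, 1_000)       # dbh (cm) of trees along roads
population = np.concatenate([interior, edge])
true_mean = population.mean()

for n in (50, 500, 5_000):
    random_sample = rng.choice(population, size=n, replace=False)
    edge_sample = rng.choice(edge, size=min(n, 1_000), replace=False)
    print(n,
          round(random_sample.mean() - true_mean, 2),   # shrinks toward 0
          round(edge_sample.mean() - true_mean, 2))     # stays near +7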
