{{Ficontent}}
Mostly, one speaks about [[simple random sampling|random sampling]] with equal selection probabilities: each element of the [[population]] has the same probability of being selected. However, there are situations in which this idea of equal selection probabilities does not appear reasonable: if it is known that some elements carry much more information about the [[target variable]], they should also have a greater chance of being selected. [[Stratified sampling|Stratification]] goes in that direction: there, the [[Inclusion probability|inclusion probabilities]] within a stratum are the same, but they may differ between strata.

Sampling with unequal selection probabilities is still random sampling – not [[simple random sampling]], but “random sampling with unequal selection probabilities”. These selection probabilities must, of course, be defined for each and every element of the population before sampling, and no element of the population may have a selection probability of 0.

Various [[:category:sampling design|sampling strategies]] that are important for forest inventory are based upon the principle of unequal selection probabilities, including

*angle count sampling ([[Bitterlich sampling]]),
*[[importance sampling]],
*[[3 P sampling]],
*[[randomized branch sampling]].

After a general presentation of the statistical concept and the estimators, these applications are addressed.

In unequal probability sampling, we distinguish two different probabilities – which actually are two different points of view on the sampling process:

The selection probability is the probability that element ''i'' is selected at one draw (selection step). The [[Hansen-Hurwitz estimator]] for sampling with replacement (that is, when the selection probabilities do not change from draw to draw) is based on this probability. The selection probability is denoted <math>P_i</math> or <math>p_i</math>.

The [[inclusion probability]] refers to the probability that element ''i'' is eventually included in the sample of size ''n''. The [[Horvitz-Thompson estimator]] is based on the inclusion probability and is applicable to sampling with or without replacement. The inclusion probability is generally denoted by <math>\pi_i</math>.

{{info
|message=obs:
|text=A typical example of sampling with equal inclusion probabilities is given by fixed-area [[fixed area plots|sample plots]] in forest inventories. With this concept, and under the assumption that sample points are randomly distributed over an area of interest, each tree has the same probability of becoming part of a sample. In contrast to this constant [[inclusion probability]], it is possible to weight the probability proportionally to a meaningful variable: imagine, for example, different plot sizes for different tree dimensions. If bigger trees are observed on larger plots and smaller trees on smaller plots, their probability of being included in a sample is no longer constant. This weighting is particularly efficient if the inclusion probability is proportional to the respective target variable (as, for example, in relascope sampling).
}}

==List sampling = PPS sampling==

<br>
If sampling with unequal selection probabilities is indicated, the probabilities need to be determined for each element before sampling can start. If a size variable is available, the selection probabilities can be calculated proportional to size. This is then called PPS sampling ('''p'''robability '''p'''roportional to '''s'''ize).

<blockquote>
{|
| width="800pt" align="left" | '''Table 1.''' Listed sampling frame as used for “list sampling”, where the selection probability is determined proportional to size.
{| cellspacing="0" border="1" cellpadding="5"
|-
| width="200pt" align="center" | Population element
| width="200pt" align="center" | List of the size variables of the population elements
| width="200pt" align="center" | List of cumulative sums
| width="200pt" align="center" | Assigned range
|-
| width="200pt" align="center" | 1
| width="200pt" align="center" | 10
| width="200pt" align="center" | 10
| width="200pt" align="center" | 0 - 10
|-
| width="200pt" align="center" | 2
| width="200pt" align="center" | 20
| width="200pt" align="center" | 30
| width="200pt" align="center" | > 10 - 30
|-
| width="200pt" align="center" | 3
| width="200pt" align="center" | 30
| width="200pt" align="center" | 60
| width="200pt" align="center" | > 30 - 60
|-
| width="200pt" align="center" | 4
| width="200pt" align="center" | 60
| width="200pt" align="center" | 120
| width="200pt" align="center" | > 60 - 120
|-
| width="200pt" align="center" | 5
| width="200pt" align="center" | 100
| width="200pt" align="center" | 220
| width="200pt" align="center" | > 120 - 220
|}
|}
</blockquote>
This sampling approach is also called list sampling because the selection can most easily be explained by listing the size variables and selecting from the list of cumulative sums with uniformly distributed random numbers (which perfectly simulates the unequal probability selection process). This is illustrated in Table 1: the size variables of the 5 elements are listed (not necessarily in any order!) and the cumulative sums are calculated. Then, a uniformly distributed random number is drawn between the lowest and the highest possible value, that is, from 0 to the total sum.

Assume, for example, that the random number 111.11 is drawn; it falls into the range “> 60 - 120”, so that element 4 is selected. Obviously, the elements then have a selection probability proportional to the size variable.
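This selection mechanism can be sketched in a few lines of code; the following minimal example uses the size variables and cumulative sums of Table 1:

<pre>
import bisect
import random

# Size variables of the five population elements (Table 1)
sizes = [10, 20, 30, 60, 100]

# List of cumulative sums: 10, 30, 60, 120, 220
cumulative = []
running_total = 0
for s in sizes:
    running_total += s
    cumulative.append(running_total)

def draw_element():
    """Select one element with probability proportional to its size."""
    u = random.uniform(0, running_total)           # uniform random number between 0 and the total sum
    return bisect.bisect_left(cumulative, u) + 1   # element whose assigned range contains u

# The random number 111.11 falls into the range "> 60 - 120", i.e. element 4:
print(bisect.bisect_left(cumulative, 111.11) + 1)
print(draw_element())
</pre>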
==Hansen-Hurwitz estimator==

<br>
The Hansen-Hurwitz estimator provides the framework for all unequal probability sampling with replacement (Hansen and Hurwitz 1943). “With replacement” means that the selection probabilities are the same for all draws; if selected elements were not replaced (put back into the population), the selection probabilities of the remaining elements would change after each draw.

Suppose that a sample of size ''n'' is drawn with replacement and that on each draw the probability of selecting the ''i''-th unit of the population is <math>p_i</math>.

Then the Hansen-Hurwitz estimator of the population total is

::<math>\hat \tau = \frac {1}{n} \sum_{i=1}^n \frac {y_i}{p_i}</math>

Here, each observation <math>y_i</math> is weighted by the inverse of its selection probability <math>p_i</math>.

The parametric variance of the estimated total is

::<math>var (\hat \tau) = \frac {1}{n} \sum_{i=1}^N p_i \left (\frac {y_i}{p_i} - \tau \right )^2</math>

which is unbiasedly estimated from a sample of size ''n'' by

::<math>v\hat ar (\hat \tau) = \frac {1}{n} \frac {\sum_{i=1}^n \left (\frac {y_i}{p_i} - \hat \tau \right )^2}{n-1}</math>
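As a minimal sketch of how these two formulas are applied (the observations below are hypothetical; the selection probabilities are taken proportional to three of the sizes in Table 1):

<pre>
# Minimal sketch of the Hansen-Hurwitz estimator and its variance estimator.
# y[i] is the observation and p[i] the selection probability of the i-th draw
# of a with-replacement sample of size n (hypothetical values below).

def hansen_hurwitz(y, p):
    n = len(y)
    expanded = [yi / pi for yi, pi in zip(y, p)]              # y_i / p_i for every draw
    tau_hat = sum(expanded) / n                               # estimated total
    var_hat = sum((e - tau_hat) ** 2 for e in expanded) / (n * (n - 1))
    return tau_hat, var_hat

y = [12.0, 55.0, 95.0]                 # hypothetical observations
p = [30 / 220, 60 / 220, 100 / 220]    # selection probabilities of the drawn elements

print(hansen_hurwitz(y, p))
</pre>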
{{Exercise
|message=Hansen-Hurwitz estimator examples
|text=4 application examples
| }} | | }} |
==Horvitz-Thompson estimator==

<br>
Assume that, with any design, with or without replacement, the probability of including unit ''i'' in the sample is <math>\pi_i</math> (> 0), for ''i'' = 1, 2, …, ''N''. The inclusion probability <math>\pi_i</math> can be calculated from the selection probability <math>p_i</math> and the corresponding complementary probability (1 - ''p<sub>i</sub>''), which is the probability that the element is not selected at a particular draw.

After ''n'' sample draws, the probability that element ''i'' is eventually included in the sample is <math>\pi_i = 1 - (1 - p_i)^n</math>, where <math>(1 - p_i)^n</math> is the probability that the particular element is not selected in any of the ''n'' draws; the complementary probability to this is then the probability that the element is eventually in the sample (selected at least once).

The Horvitz-Thompson estimator can be applied to sampling with or without replacement, but here it is illustrated for the case with replacement.

For the variance calculation with the Horvitz-Thompson estimator we also need to know the joint inclusion probability <math>\pi_{ij}</math> of two elements ''i'' and ''j'' after ''n'' sample draws, that is, the probability that both ''i'' and ''j'' are eventually in the sample after ''n'' draws. This joint inclusion probability is calculated from the two selection probabilities and the two inclusion probabilities as <math>\pi_{ij} = \pi_i + \pi_j - \left \{ 1 - (1 - p_i - p_j)^n \right \}</math> and can be illustrated as in Figure 1.

[[image:SkriptFig_100.jpg|thumb|1000px|'''Figure 1.''' Diagram illustrating the joint inclusion probability.]]
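For a quick numerical illustration of these two formulas (the selection probabilities and the number of draws are hypothetical):

<pre>
# Numerical illustration of the inclusion and joint inclusion probabilities
# after n with-replacement draws (p_i, p_j and n are hypothetical values).

def incl(p_i, n):
    """pi_i = 1 - (1 - p_i)^n : probability of being selected at least once."""
    return 1 - (1 - p_i) ** n

def joint_incl(p_i, p_j, n):
    """pi_ij = pi_i + pi_j - {1 - (1 - p_i - p_j)^n}."""
    return incl(p_i, n) + incl(p_j, n) - (1 - (1 - p_i - p_j) ** n)

p_i, p_j, n = 0.05, 0.10, 10
print(incl(p_i, n))             # about 0.401
print(incl(p_j, n))             # about 0.651
print(joint_incl(p_i, p_j, n))  # about 0.249
</pre>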
The Horvitz-Thompson estimator for the total is

::<math>\hat \tau = \sum_{i=1}^\nu \frac {y_i}{\pi_i}</math>

where the sum goes over the <math>\nu</math> distinct elements in the sample of size ''n'' (and not over all ''n'' draws); <math>\nu</math> is the Greek letter ''nu''.

The parametric error variance of the estimated total is

::<math>var(\hat \tau)=\sum_{i=1}^N \left (\frac {1 - \pi_i}{\pi_i} \right ) y_i^2 + \sum_{i=1}^N \sum_{j \ne i} \left (\frac {\pi_{ij} - \pi_i \pi_j}{\pi_i \pi_j} \right ) y_i y_j</math>

which is estimated by

::<math>v\hat ar(\hat \tau)=\sum_{i=1}^\nu \left (\frac {1 - \pi_i}{\pi_i^2} \right ) y_i^2 + \sum_{i=1}^\nu \sum_{j \ne i} \left (\frac {\pi_{ij} - \pi_i \pi_j}{\pi_i \pi_j} \right ) \frac {y_i y_j}{\pi_{ij}}</math>

A simpler (but slightly biased) approximation of the estimated error variance of the total is

::<math>v\hat ar(\hat \tau) = \frac {N - \nu}{N} \frac {1}{\nu} \frac {\sum_{i=1}^\nu (\tau_i -\hat \tau)^2}{\nu - 1}</math>

where <math>\tau_i</math> is the estimate of the total that results from each of the <math>\nu</math> sample elements.
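A compact sketch of the Horvitz-Thompson total and of the variance estimator with joint inclusion probabilities, again with hypothetical data, could look like this:

<pre>
# Sketch of the Horvitz-Thompson estimator for a with-replacement design.
# y and p are the observations and per-draw selection probabilities of the
# nu distinct elements in the sample (hypothetical values below).

def horvitz_thompson(y, p, n_draws):
    nu = len(y)
    pi = [1 - (1 - pk) ** n_draws for pk in p]            # inclusion probabilities
    tau_hat = sum(yk / pik for yk, pik in zip(y, pi))     # estimated total
    # variance estimator using the joint inclusion probabilities pi_ij
    var_hat = sum((1 - pi[i]) / pi[i] ** 2 * y[i] ** 2 for i in range(nu))
    for i in range(nu):
        for j in range(nu):
            if i != j:
                pij = pi[i] + pi[j] - (1 - (1 - p[i] - p[j]) ** n_draws)
                var_hat += (pij - pi[i] * pi[j]) / (pi[i] * pi[j]) * y[i] * y[j] / pij
    return tau_hat, var_hat

y = [12.0, 55.0, 95.0]
p = [30 / 220, 60 / 220, 100 / 220]
print(horvitz_thompson(y, p, n_draws=3))
</pre>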
{{Exercise
|message=Horvitz-Thompson estimator example
|text=application example
}}
==Bitterlich sampling==

<br>
For the inclusion zone approach, in which an inclusion zone is defined for each tree, see the corresponding article on the inclusion zone approach. If this approach is used, the inclusion probability is proportional to the size of the inclusion zone – which actually defines the probability that the corresponding tree is included in a sample.

We saw, for example, that angle count sampling ([[Bitterlich sampling]]) selects the trees with a probability proportional to their basal area, and we emphasized that this fact makes Bitterlich sampling so efficient for basal area estimation. In contrast, point-to-tree distance sampling (k-tree sampling) has inclusion zones that do not depend on any individual tree characteristic but only on the spatial arrangement of the neighboring trees; therefore, point-to-tree distance sampling is not particularly precise for any tree characteristic.

In Bitterlich sampling, the inclusion probability of a particular tree ''i'' results from its inclusion zone ''F<sub>i</sub>'' (in m²) and the size of the reference area, for example the hectare:

::<math>\pi_i = \frac {F_i}{10000}</math>

With the Horvitz-Thompson estimator, we have the total

::<math>\hat \tau = \sum_{i=1}^m \frac {y_i}{\pi_i}</math>

for any tree attribute <math>y_i</math>. Applied to estimating basal area <math>y_i = g_i = \frac {\pi}{4} d_i^2</math> and its per-hectare estimation, we have

::<math>\hat \tau = \sum_{i=1}^m \frac {y_i}{\pi_i} = \sum_{i=1}^m \cfrac {\cfrac {\pi}{4} d_i^2}{\cfrac {F_i}{10000}}</math>

and with <math> F_i = \pi r_i^2 = \pi c^2 \, d_i^2</math>, we obtain the same result as in [[Bitterlich sampling]]:

::<math>\hat \tau = \sum_{i=1}^m \frac {y_i}{\pi_i} = \sum_{i=1}^m \cfrac {\cfrac {\pi}{4} d_i^2}{\cfrac {\pi c^2 \, d_i^2}{10000}} = \frac {2500 \pi}{\pi c^2} \sum_{i=1}^m \frac {d_i^2}{d_i^2} = \frac {2500}{c^2} m</math>

which is the estimated basal area per hectare from one sample point where ''m'' trees were tallied. The factor 2500/''c''² is the ''basal area factor''; for details see [[Bitterlich sampling]].
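A short sketch of these per-hectare expansions; the basal area factor and the dbh values are hypothetical:

<pre>
import math

# Per-hectare expansion from one angle count sample point (hypothetical data).
# With basal area factor BAF, the constant c follows from BAF = 2500 / c**2,
# and the inclusion zone of tree i is F_i = pi * c**2 * d_i**2 (dbh d_i in m).

BAF = 4.0
c = math.sqrt(2500.0 / BAF)               # here c = 25, i.e. inclusion radius r_i = 25 * d_i

dbh = [0.32, 0.45, 0.28, 0.51]            # dbh (m) of the m tallied trees

# basal area per hectare: every tallied tree contributes exactly one BAF
basal_area_per_ha = BAF * len(dbh)

# any other attribute y_i is expanded by its inclusion probability F_i / 10000;
# with y_i = 1 this gives the estimated stem number per hectare:
stems_per_ha = sum(10000.0 / (math.pi * c ** 2 * d ** 2) for d in dbh)

print(basal_area_per_ha, round(stems_per_ha, 1))
</pre>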
==Importance sampling==

<br>
Importance sampling is a sampling strategy that selects samples with probability proportional to size – but not from a discrete population of single elements, each of which has a selection probability. Importance sampling is applicable to continuous populations, where the size attribute is a function from which a probability density function is derived.

A typical application in forestry is the estimation of individual tree volume by sampling the taper curve: we imagine that a taper curve is given, as, for example, in Figure 2.

If ''A''(''h'') is the function of basal area over height, the stem volume from the bottom to an upper height value <math>H_u</math> can be determined from

::<math>\int_{0}^{H_u} A(h) dh</math>.

This integral is now to be estimated by selecting some heights at which basal area measurements are taken. One could select simply uniformly distributed height values, thus assigning the same selection probabilities to the lower height values, where there is a lot of wood volume, and to the upper height values, where there is much less volume. It obviously makes sense to use unequal selection probabilities that decrease continuously from the bottom to the top of the stem.

To do that, we must develop a scheme for how to define the selection probabilities. In list sampling for discrete elements, we could craft a list and assign selection probabilities proportional to an ancillary size variable. With a continuous population, we must devise a continuous function from which to sample with unequal probabilities. It would be optimal to know the exact taper curve, because then we would make a perfect estimate of the target variable, the volume or area below the curve (just as we would make a perfect estimate of the total with the Hansen-Hurwitz estimator if the selection probabilities could be defined strictly proportional to the target variable). As we do not know the taper curve, we use a proxy. Figure 2 shows various options together with the true taper curve of a sample tree. To build the proxy probability density function one needs input information; what we usually have is dbh and height, so that the proxy taper function goes through these points and intersects the abscissa at tree height (tree radius = 0).

A probability density function (pdf) must have various properties:

*it must have positive values on the interval <math>[H_b , H_u]</math>;
*it must be 0 outside that interval;
*and its integral over the range <math>[H_b , H_u]</math> must be 1.

All these conditions, by the way, are also satisfied when simple random sampling is applied: if the range of possible values has length ''R'', then the probability density function is a parallel to the abscissa intersecting the ordinate at the value 1/''R''; by that, it is guaranteed that the total probability density under the curve is 1.0.

[[image:SkriptFig_101.jpg|thumb|1000px|'''Figure 2.''' Plot of height at stem against basal area.]]

A linear pdf is possible (''r''=4 in Figure 2). If <math>H_u</math> is the stem length (or total height), then the linear ''pdf'' takes on the form

::<math>f(h) = \frac {2}{H_u} - \frac {2}{H_u^2} h </math>,

being defined on the range [0, <math>H_u</math>].
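As a quick check, this linear function is indeed a valid pdf on that range: it is non-negative there and integrates to 1,

::<math>\int_0^{H_u} \left( \frac{2}{H_u} - \frac{2}{H_u^2}\,h \right) dh = 2 - \frac{2}{H_u^2} \cdot \frac{H_u^2}{2} = 1</math>.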
While the linear model works nicely in many cases, a better approximation can frequently be achieved by curves of the form

::<math> d(h) = D \left [ \frac {H-h}{H} \right ]^{\frac {2}{r}}</math>

Three examples for different values of the coefficient ''r'' are depicted in Figure 2.

If we select ''n'' sample heights <math>\theta_i</math> according to the ''pdf'' <math>f(\theta_i)</math> and measure the basal area <math>A(\theta_i)</math> there, then the volume ''V'' of that particular tree is estimated by the Hansen-Hurwitz estimator

::<math>\hat V = \frac {1}{n} \sum_{i=1}^n \frac {A(\theta_i)}{f(\theta_i)}</math>.

We denote by <math>V_p</math> the volume that results from the proxy function <math>A_p (h)</math> on the interval from 0 to <math>H_u</math>. It is a biased volume, as <math>A_p (h)</math> is but a proxy for the true function of basal area over height. The probability density function <math>f(h)</math> is then, for <math>0 \le h \le H_u</math>,

::<math>f(h) = \frac {A_p (h)}{V_p}</math>

Then, the volume estimation from measurements at ''n'' heights along the stem – selected according to the ''pdf'' <math>f(h)</math> – can be re-written as

::<math>\hat V = V_p \frac {1}{n}\sum_{i=1}^n \frac {A(\theta_i)}{A_p(\theta_i)}</math>,

where the expression to the right of <math>V_p</math> can be interpreted as a “calibration factor” which makes the estimation <math>V_p</math> unbiased.

The parametric error variance of the volume estimation from a sample of size ''n'' is

::<math>var(\hat V) = \frac {1}{n} \int_{0}^{H_u} f(h) \left [ \frac {A(h)}{f(h)} - V \right ]^2 dh = \frac {1}{n} \left [ \int_{0}^{H_u} \frac {A^2(h)}{f(h)} dh - V^2 \right ]</math>

which is estimated from a sample of size ''n'' by

::<math>v\hat ar(\hat V) = \frac {1}{n(n-1)} \sum_{i=1}^n \left [ \frac {A(\theta_i)}{f(\theta_i)} - \hat V \right ]^2</math>.
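The whole procedure (building the proxy, sampling heights from the resulting pdf, and calibrating <math>V_p</math>) can be sketched as follows; the “true” taper curve used here to generate the measurements is purely an assumption for the illustration, since in the field <math>A(\theta_i)</math> would simply be measured at the selected heights:

<pre>
import math
import random

# Sketch of importance sampling along one stem. The proxy is anchored at the
# stem base with diameter D and at tree height H (a simplification), and the
# "true" taper used to generate the measurements is itself only an assumption.

D, H, r = 0.40, 30.0, 3.0                        # dbh (m), tree height (m), proxy exponent

def A_true(h):                                   # hypothetical true cross-sectional area (m^2)
    return math.pi / 4.0 * (D * ((H - h) / H) ** 0.7) ** 2

def A_proxy(h):                                  # proxy taper d(h) = D * ((H-h)/H)^(2/r)
    return math.pi / 4.0 * (D * ((H - h) / H) ** (2.0 / r)) ** 2

k = 4.0 / r + 1.0                                # exponent of A_proxy plus 1
V_p = math.pi / 4.0 * D ** 2 * H / k             # integral of A_proxy from 0 to H

def draw_height():
    """Draw a height with pdf f(h) = A_proxy(h) / V_p (inverse-CDF method)."""
    u = random.random()
    return H * (1.0 - u ** (1.0 / k))

n = 3
thetas = [draw_height() for _ in range(n)]
V_hat = V_p * sum(A_true(t) / A_proxy(t) for t in thetas) / n
print(V_hat)
</pre>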
For illustration: in a sampling study, the taper curves of several hundred sample trees (spruce and Douglas fir) were accurately determined by many measurements, so that different sampling approaches for the estimation of stem volume could be simulated (Kleinn 1993). The performance of different proxy functions (which define the unequal selection probabilities) was compared; the results are presented in Table 2. With simple random sampling, the per-tree volume estimation with ''n'' = 1 has a relative standard error of about 70% – which can, of course, only be determined by simulation, as a single sample of ''n'' = 1 does not allow estimating the error variance. A linear probability density function (defined by tree height and the default measurement at breast height) reduces the relative standard error to about 17%, which can still be improved by using a curvilinear probability density function (''r'' = 3 in the function given above; see also Table 2).

<blockquote>
{|
| width="800pt" align="left" |'''Table 2.''' Results from a simulation study on several hundred trees (spruce and Douglas fir). Given is the mean relative error (cv%) of the volume estimate for importance sampling of individual trees with one measurement per tree (''n''=1) (from Kleinn 1993). The estimations are given for different approaches to unequal probability sampling, where the function <math>d(h) = D \left [ \frac {H-h}{H} \right ]^{\frac {2}{r}}</math> was used to define the shape of the proxy probability function. “Uniform” means simple random sampling from a uniform distribution of random numbers.
{| border="1" cellpadding="5" cellspacing="0"
|-
! width="200pt" align="center" | Species
! width="200pt" align="center" | Uniform
! width="200pt" align="center" | Linear pdf
! width="200pt" align="center" colspan="2"| Pdf from proxy function with
|-
|
|
|
|align="center" | '''r=3'''
|align="center" | '''r=5'''
|-
| align="center" | Norway spruce
| align="center" | 69.8
| align="center" | 17.8
| align="center" | 12.9
| align="center" | 25.0
|-
| align="center" | Douglas fir
| align="center" | 70.2
| align="center" | 16.2
| align="center" | 9.8
| align="center" | 24.5
|}
|}
</blockquote>
==Randomized branch sampling==

<br>
Total tree bark volume is a variable that cannot easily be measured directly. The “true” volume could theoretically be determined by stripping off all bark and using water displacement to measure its volume. However, this is impractical, and the obvious way to go is to develop simple models based on pragmatic sampling techniques.

To sample variables such as bark, we imagine the tree as a population of ''N'' above-ground stem and branch sections, where each section goes from one fork (or node) to the next – except for the bottom and top sections, at which the tree begins and ends, respectively. From this set of ''N'' sections we would then select ''n'' sections as a sample.

Doing so by simple random sampling (SRS), for example, we could directly estimate the mean bark volume per section. However, for estimation of the total we would then face the problem that we need to know the population size, i.e. the total number of sections, in order to determine the expansion factor that extrapolates the mean section estimate to the whole tree. If the population size is known, we also know the selection probability of each section – 1/''N'' for simple random sampling: this selection probability is required to develop an unbiased estimator for any design-based sampling strategy. This is what we call probabilistic sampling.

In addition, to be able to carry out simple random selection, we also need to define the sampling frame so that we can unambiguously identify the individual sampling elements (sections, in our case). Both tasks (finding the population size and then defining the sampling frame) are clearly impractical for estimating total tree bark with a simple random sampling approach.

Randomized branch sampling (RBS) is a sampling strategy that facilitates the drawing of a probabilistic sample without ''a priori'' defining the sampling frame. The selection probabilities of the selected population elements are determined in the course of the sampling process itself. RBS was developed by Jessen (1955)<ref>Jessen R.J. 1955. Determining the fruit count on a tree by randomized branch sampling. Biometrics 11:99-109.</ref> for the estimation of fruit count in orchards and has since been successfully applied to the estimation of various tree variables (e.g. Valentine ''et al.'' 1984<ref>Valentine HT, LM Tritton and GM Furnival. 1984. Subsampling Trees for Biomass, Volume or Mineral Content. Forest Science 30(3):673-681.</ref>, Gregoire ''et al.'' 1995<ref>Gregoire TG, HT Valentine and GM Furnival. 1995. Sampling methods to estimate foliage and other characteristics of individual trees. Ecology 76:1181-1194.</ref>, Good ''et al.'' 2001<ref name="Good 2001">Good NM, M Paterson, C Brack and K Mengersen. 2001. Estimating Tree Component Biomass Using Variable Probability Sampling Methods. Journal of Agricultural, Biological, and Environmental Statistics 6(2):258–267.</ref>, Cancino 2003, Cancino and Saborowski 2005<ref>Cancino J and J Saborowski. 2005. Comparison of randomized branch sampling with and without replacement at the first stage. Silva Fennica 39(2):201-216.</ref>).

The principle of RBS can be visualized as a randomized unidirectional walk along the network of stem and branch sections, starting from the bottom of the tree (or another defined starting point) and ending at a defined end point (in our case, at a minimum branch diameter of 5 cm). Going along the path, at each fork a probability-based decision (utilizing random number tables or dice) is made about the branch along which to proceed. Therefore, for each fork, the selection probability <math>q_i</math> of the next section ''i'' is known. This permits the calculation of the overall selection probability of each section within the path as the product of the selection probabilities of all preceding sections. In Figure 3, for illustration, the marked outermost section has the selection probability <math>p_3 = q_1 * q_2 * q_3</math>. The first section (the stem) has the selection probability <math>q_0 =1</math> and therefore also <math>p_0 = 1</math>, because that section is part of all possible sample paths.

[[image:SkriptFig_102.jpg|center|thumb|1000px|'''Figure 3.''' Illustration of randomized branch sampling. The path selected here follows the arrows along the branches. For each section, its specific selection probability is determined by the random selection carried out at its starting point. The overall selection probability is then calculated as the product of the specific selection probabilities of all preceding sections. The first section (stem) is always “selected”, so that q<sub>0</sub>=1.]]

Knowing the selection probability of each section of a path, an estimator for the total of the target variable can be developed following the Hansen-Hurwitz approach. The total <math>\tau</math> can then be estimated from one path of ''m'' sections with values <math>y_i</math>, selected with overall probabilities <math>p_i</math>, by

::<math> \hat \tau = \sum_{i=1}^m \frac {y_i}{p_i}</math>.

<div style = "float:right; margin-left:4em">
{| border="1" cellpadding="15"
| width="200pt" align="left"|'''Figure 4.''' Illustration of estimation in randomized branch sampling (after Good et al. 2001<ref name="Good 2001" />): for each section level, the observed value (bold rectangle) is expanded to an estimated total value by dividing that value by its selection probability, which is indicated here by the arrows. The sum of all expanded values is the estimation of the tree's total. The stem has selection probability 1, so that no expansion takes place. Obs: the heights of the sections are set equal here while, of course, they vary within and between sections. The width of the section levels is set to 100% here; the absolute values would also vary.
|[[image:SkriptFig_103.jpg]]
|}
</div>

Following statistical sampling principles, one path provides one independent observation. This observation is composed of several “sub-observations”, the sections. This is the same principle that is also applied in relascope sampling, where from one sample point various sample trees are included with selection probabilities proportional to their basal area; the sample tree values are then combined into one sample point observation by weighting them according to their individual selection probabilities. For randomized branch sampling, the estimation mechanism is illustrated in Figure 4: dividing the observed section value by its per-section selection probability provides an estimation of the total on this section level (Good et al. 2001<ref name="Good 2001" />).

If one path constitutes a sample of size ''n'' = 1, then more paths need to be selected per tree if estimation of precision is an issue. From ''n'' selected paths we generate ''n'' bark volume estimations <math> \hat V_j</math>, the mean of which is taken as the best estimate

::<math>\bar V = \frac {1}{n} \sum_{j=1}^n \hat V_j </math>

with estimated variance

::<math>v\hat ar (\bar V) = \frac {s^2}{n} = \frac {1}{n} \frac {\sum_{j=1}^n (\hat V_j - \bar V)^2}{(n-1)}</math>.
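The selection and estimation mechanism can be sketched as follows; the tree structure, the section values, and the choice of branching probabilities (here proportional to a size proxy of each branch) are all hypothetical:

<pre>
import random

# Sketch of randomized branch sampling. The tree is a nested structure of
# sections (value y, a size proxy used for the branching probabilities q,
# and the child sections); all numbers are hypothetical, and taking q
# proportional to the size proxy is just one possible choice.

tree = (8.0, 60.0, [
    (3.0, 25.0, [(1.0, 8.0, []), (0.6, 5.0, [])]),
    (4.0, 30.0, [(1.5, 12.0, []), (0.8, 6.0, [])]),
])

def sample_path(section, p=1.0):
    """Walk one random path; return the (y_i, p_i) pairs of its sections."""
    y, _, children = section
    path = [(y, p)]                                  # the stem enters with p_0 = 1
    if children:
        total = sum(size for _, size, _ in children)
        u, acc = random.uniform(0, total), 0.0
        for child in children:                       # select the next branch with prob. q_i
            acc += child[1]
            if u <= acc:
                path += sample_path(child, p * child[1] / total)
                break
    return path

def path_estimate(path):
    """Per-path estimate of the tree total: sum of y_i / p_i."""
    return sum(y / p for y, p in path)

n = 4                                                # number of independent paths
estimates = [path_estimate(sample_path(tree)) for _ in range(n)]
mean = sum(estimates) / n
se = (sum((e - mean) ** 2 for e in estimates) / ((n - 1) * n)) ** 0.5
print(mean, se)
</pre>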
==References==