Robust
Estimators that are perfectly satisfactory for use with symmetric normally distributed populations may not be as desirable when the data come from nonsymmetric or heavy-tailed populations, or when there is a substantial risk of contamination with extreme values.
When estimating measures of central location, one way to create a more robust estimator is to trim the sample of its minimum and maximum values (the procedure used when judging ice skating or gymnastics). Because trimming throws away information, trimmed estimators are less efficient when the data really are normal.
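To make this concrete, here is a minimal sketch in Python (the function name and the use of numpy are ours, not the book's) of a symmetrically trimmed mean that drops the k smallest and k largest observations before averaging:

    import numpy as np

    def trimmed_mean(x, k=1):
        # Drop the k smallest and k largest observations, then average the rest.
        # With k = 1 this mirrors the ice-skating rule of discarding the extreme scores.
        x = np.sort(np.asarray(x, dtype=float))
        if x.size <= 2 * k:
            raise ValueError("sample too small to trim")
        return x[k:-k].mean()

scipy.stats.trim_mean implements the same idea, with the amount trimmed expressed as a proportion of the sample rather than a count.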
In many instances, LAD (least absolute deviation) estimators are more robust than their LS (least squares) counterparts; see, for example, Yoo [2001]. This finding is in line with our discussion of the F statistic in the preceding chapter.
Many semiparametric estimators are not only robust but provide high ARE (asymptotic relative efficiency) with respect to their parametric counterparts.
As an example of a semiparametric estimator, suppose the {X_i} are independent identically distributed (i.i.d.) observations with distribution Pr{X_i < x} = F[x − Δ] and we want to estimate the location parameter Δ without having to specify the form of the distribution F. If F is normal and the loss function is proportional to the square of the estimation error, then the arithmetic mean is optimal for estimating Δ. Suppose, on the other hand, that F is symmetric but more likely to include very large or very small values than a normal distribution. Whether the loss function is proportional to the absolute value or the square of the estimation error, the median, a semiparametric estimator, is to be preferred. The median has an ARE relative to the mean that ranges from 0.64 (if the observations really do come from a normal distribution) to values well in excess of 1 for distributions with higher proportions of very large and very small values (Lehmann, 1998, p. 242). Still, if the unknown distribution is “almost” normal, the mean would be far preferable.
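A small simulation illustrates the trade-off (a sketch; the sample size, replication count, and the choice of a t distribution with 3 degrees of freedom as the heavy-tailed example are our assumptions, not the book's):

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 25, 10_000

    def mse(estimates):
        return np.mean(estimates ** 2)  # the true location is 0 in both scenarios

    for name, data in [("normal", rng.standard_normal((reps, n))),
                       ("t(3)", rng.standard_t(3, (reps, n)))]:
        print(f"{name}: mean MSE = {mse(data.mean(axis=1)):.4f}, "
              f"median MSE = {mse(np.median(data, axis=1)):.4f}")

Under the normal distribution the ratio of the two mean squared errors reproduces the 0.64 figure quoted above; under the heavy-tailed t(3) the ordering reverses.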
If we are uncertain whether or not F is symmetric, then our best choice is the Hodges-Lehmann estimator defined as the median of the pairwise averages
Δ̂ = median_{i≤j} (X_i + X_j)/2.
Its ARE relative to the mean is 0.97 when F is a normal distribution (Lehmann, 1998, p. 246). With little to lose with respect to the mean if F is near normal, and much to gain if F is not, the Hodges-Lehmann estimator is recommended.
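A direct implementation (our sketch; the O(n²) enumeration of pairs is acceptable for moderate sample sizes):

    import numpy as np
    from itertools import combinations_with_replacement

    def hodges_lehmann(x):
        # Median of the pairwise averages (X_i + X_j)/2 over all pairs with i <= j.
        averages = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
        return float(np.median(averages))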
Suppose {X_i} and {Y_j} are i.i.d. with distributions Pr{X_i < x} = F[x] and Pr{Y_j < y} = F[y − Δ], and we want to estimate the shift parameter Δ without having to specify the form of the distribution F. For a normal distribution F, the optimal estimator with least-squares losses is the arithmetic average of the mn differences Y_j − X_i. Means are highly dependent on extreme values; a more robust estimator is the median of the pairwise differences,

Δ̂ = median_{i,j} (Y_j − X_i).
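In code, the two-sample version is just the median of all mn pairwise differences (our sketch):

    import numpy as np

    def hodges_lehmann_shift(x, y):
        # Median of the m*n pairwise differences Y_j - X_i.
        diffs = np.subtract.outer(np.asarray(y, dtype=float), np.asarray(x, dtype=float))
        return float(np.median(diffs))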
Minimum Loss
The value taken by an estimate, its accuracy (that is, the degree to which it comes close to the true value of the estimated parameter), and the associated losses will vary from sample to sample. A minimum loss estimator is one that minimizes the losses when the losses are averaged over the set of all possible samples. Thus its form depends upon all of the following: the loss function, the population from which the sample is drawn, and the population characteristic that is being estimated. An estimate that is optimal in one situation may only exacerbate losses in another.
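In the standard decision-theoretic notation (our addition, not the book's), with L(d, θ) the loss incurred by reporting d when the true value is θ, a minimum loss estimator θ̂ minimizes the risk

R(θ̂) = E[ L(θ̂(X), θ) ],

the loss averaged over all samples X the population could produce.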
Minimum loss estimators in the case of least-squares losses are well documented for a wide variety of cases. Linear regression with an LAD loss function is discussed in Chapter 9.
Mini-Max Estimators
It’s easy to envision situations in which we are less concerned with the average loss than with the maximum possible loss we may incur by using a particular estimation procedure. An estimate that minimizes the maximum possible loss is termed a mini-max estimator. Alas, few off-the-shelf mini-max solutions are available for practical cases, but see Pilz [1991] and Pinelis [1988].
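In the same notation, a mini-max estimator instead solves

min over θ̂ of max over θ of E[ L(θ̂(X), θ) ],

guarding against the worst-case value of the parameter rather than the average loss.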
Other Estimation Criteria
The expected value of an unbiased estimator is the population characteristic being estimated. Thus, unbiased estimators whose variances shrink as the sample size grows are also consistent estimators.
Minimum variance estimators provide relatively consistent results from sample to sample. While minimum variance is desirable, it may be of practical value only if the estimator is also unbiased. For example, the constant 6 is a minimum variance estimator (its variance is zero), but it offers few other advantages.
Plug-in estimators, in which one substitutes the sample statistic for the corresponding population statistic (the sample mean for the population mean, or the sample's 20th percentile for the population's 20th percentile), are consistent, but they are not always unbiased or minimum loss.
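For instance (a sketch; the exponential population and sample size are our choices):

    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.exponential(scale=2.0, size=100)

    mean_hat = sample.mean()             # plug-in estimate of the population mean (2.0 here)
    p20_hat = np.percentile(sample, 20)  # plug-in estimate of the population's 20th percentile

Both estimates converge to the population values as the sample grows, which is what consistency promises; neither is guaranteed to be unbiased or minimum loss.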
Always choose an estimator that will minimize losses.
Myth of Maximum Likelihood
The popularity of the maximum likelihood estimator is hard to comprehend. This estimator may be completely unrelated to the loss function and has as its sole justification that it corresponds to that value of the parameter that makes the observations most probable—provided, that is, they are drawn from a specific predetermined distribution. The observations might have resulted from a thousand other a priori possibilities.
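To see the dependence on the assumed distribution (our sketch, using scipy; the t(3) data stand in for one of the "thousand other a priori possibilities"):

    import numpy as np
    from scipy import stats

    data = np.random.default_rng(2).standard_t(3, size=200)

    loc_norm, _ = stats.norm.fit(data)    # assuming a normal family, the MLE of location is the sample mean
    loc_lap, _ = stats.laplace.fit(data)  # assuming a Laplace family, the MLE of location is the sample median

    print(loc_norm, loc_lap)  # two different "maximum likelihood" answers from the same observations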