# Common Errors in Statistics and How to Avoid Them - Good P.I


A decision procedure d based on a statistic T is admissible with respect to a given loss function L provided there exists no second procedure d* whose losses are no greater than those of d whatever the unknown population distribution, and strictly smaller for at least one.

The importance of admissible procedures is illustrated in an unexpected way by Stein's paradox. The sample mean, which plays an invaluable role as an estimator of the population mean of a normal distribution for a single set of observations, proves to be inadmissible as an estimator when we have three or more independent sets of observations to work with. Specifically, if {X_i} are independent observations taken from k ≥ 4 distinct normal distributions with means θ_i and variance 1, and losses are proportional to the square of the estimation error, then the estimators

θ̂_i = X̄ + (1 − (k − 3)/S²)(X_i − X̄), where X̄ = (1/k) Σ_{i=1}^{k} X_i and S² = Σ_{i=1}^{k} (X_i − X̄)²,

have smaller expected losses than the individual sample means, regardless of the actual values of the population means (see Efron and Morris [1977]).
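The dominance claim can be checked empirically. The following sketch (not from the book; it assumes one observation per mean, variance 1, and uses NumPy) simulates many repetitions and compares the total squared error of the raw observations with that of the shrunken estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8                                   # number of distinct normal means (k >= 4)
theta = rng.normal(0.0, 2.0, size=k)    # arbitrary true means
n_trials = 20_000

mse_mean, mse_js = 0.0, 0.0
for _ in range(n_trials):
    x = rng.normal(theta, 1.0)          # one observation per mean, variance 1
    grand = x.mean()
    s2 = ((x - grand) ** 2).sum()
    js = grand + (1.0 - (k - 3) / s2) * (x - grand)  # shrink toward the grand mean
    mse_mean += ((x - theta) ** 2).sum()
    mse_js += ((js - theta) ** 2).sum()

print("sample means:", mse_mean / n_trials)
print("James-Stein: ", mse_js / n_trials)
```

The averaged loss of the shrunken estimates comes out below that of the individual sample means, whatever true means are plugged in, which is exactly the paradox.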

CHAPTER 4 ESTIMATION 49

SUMMARY

Desirable estimators are impartial, consistent, efficient, and robust, and they have minimum loss. Interval estimates are to be preferred to point estimates; they are less open to challenge, for they convey information about the estimate's precision.

TO LEARN MORE

Selecting more informative endpoints is the focus of Berger [2002] and Bland and Altman [1995].

Lehmann and Casella [1998] provide a detailed theory of point estimation.

Robust estimators are considered in Huber [1981], Maritz [1996], and Bickel et al. [1993]. Additional examples of both parametric and nonparametric bootstrap estimation procedures may be found in Efron and Tibshirani [1993]. Shao and Tu [1995, Section 4.4] provide a more extensive review of bootstrap estimation methods along with a summary of empirical comparisons.

Carroll and Ruppert [2000] show how to account for differences in variances between populations; this is a necessary step if one wants to take advantage of Stein-James-Efron-Morris estimators.

Bayes estimators are considered in Chapter 6.

50 PART II HYPOTHESIS TESTING AND ESTIMATION

Chapter 5

Testing Hypotheses: Choosing a Test Statistic

Forget large-sample methods. In the real world of experiments samples are so nearly always small that it is not worth making any distinction, and small-sample methods are no harder to apply. George Dyke [1997].

Every statistical procedure relies on certain assumptions for correctness. Errors in testing hypotheses come about either because the assumptions underlying the chosen test are not satisfied or because the chosen test is less powerful than other competing procedures. We shall study each of these lapses in turn.

First, virtually all statistical procedures rely on the assumption that the observations are independent.

Second, virtually all statistical procedures require at least one of the following successively weaker assumptions be satisfied under the null hypothesis:

1. The observations are identically distributed.

2. The observations are exchangeable; that is, their joint distribution is the same for any relabeling.

3. The observations are drawn from populations in which a specific parameter is the same across the populations.

The first assumption is the strongest: if it holds, the other two hold as well. It must hold for a parametric test to provide an exact significance level. The second assumption implies the third; it must hold for a permutation test to provide an exact significance level.

The third assumption is the weakest. It must hold for a bootstrap test to provide an exact significance level asymptotically.
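The second assumption can be made concrete with a small sketch (not from the book; sample sizes and seeds are illustrative). Under exchangeability, every relabeling of the pooled observations is equally likely under the null, so the observed difference in means can be referred to the distribution generated by rearranging labels:

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference in means.

    Valid whenever the pooled observations are exchangeable
    under the null hypothesis (assumption 2 above).
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)              # rearrange the labels
        diff = pooled[: len(x)].mean() - pooled[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm                # two-sided p-value

rng = np.random.default_rng(1)
same = permutation_test(rng.normal(0, 1, 20), rng.normal(0, 1, 20))
shifted = permutation_test(rng.normal(0, 1, 20), rng.normal(2, 1, 20))
```

Note that nothing about the shape of the underlying distribution is assumed; only the exchangeability of the labels under the null is used.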


TABLE 5.1 Types of Statistical Tests of Hypotheses

| Test Type | Definition | Example |
| --- | --- | --- |
| Exact | Stated significance level is exact, not approximate. | t test when observations are i.i.d. normal; permutation test when observations are exchangeable. |
| Parametric | Obtains cutoff points from a specific parametric distribution. | t test |
| Nonparametric bootstrap | Obtains cutoff points from percentiles of the bootstrap distribution of the parameter. | |
| Parametric bootstrap | Obtains cutoff points from percentiles of a parameterized bootstrap distribution of the parameter. | |
| Permutation | Obtains cutoff points from the distribution of the test statistic obtained by rearranging labels. | Tests may be based upon the original observations, on ranks, on normal or Savage scores, or on U statistics. |

An immediate consequence of the first two assumptions is that if observations come from a multiparameter distribution, then all parameters, not just the one under test, must be the same for all observations under the null hypothesis. For example, a t test comparing the means of two populations requires that the variances of the two populations be the same.

For nonparametric and parametric bootstrap tests, under the null hypothesis, the observations must all come from a distribution of a specific form.
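One common way to impose the null in a nonparametric bootstrap test of equal means is to recenter each sample at the pooled mean before resampling, so that only the tested parameter is forced to agree. The sketch below (not from the book; sample sizes and seeds are illustrative) shows this device:

```python
import numpy as np

def bootstrap_test(x, y, n_boot=10_000, seed=0):
    """Nonparametric bootstrap test that two population means are equal.

    The null is imposed by recentring each sample at the grand mean
    (assumption 3: only the parameter under test is forced to agree);
    other features of the two distributions are left free.
    """
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    grand = np.concatenate([x, y]).mean()
    x0 = x - x.mean() + grand            # each sample now has the grand mean
    y0 = y - y.mean() + grand
    count = 0
    for _ in range(n_boot):
        bx = rng.choice(x0, size=len(x), replace=True)
        by = rng.choice(y0, size=len(y), replace=True)
        if abs(bx.mean() - by.mean()) >= abs(observed):
            count += 1
    return count / n_boot                # two-sided p-value

rng = np.random.default_rng(2)
p_null = bootstrap_test(rng.normal(0, 1, 30), rng.normal(0, 1, 30))
p_alt = bootstrap_test(rng.normal(0, 1, 30), rng.normal(1.5, 1, 30))
```

A parametric bootstrap would instead fit the assumed family (say, a normal) under the null and resample from the fitted distribution, which is where the requirement of a specific distributional form enters.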
