Good, P.I. Common Errors in Statistics (and How to Avoid Them). Wiley, 2003. 235 pp.

Confidence intervals can be derived from the rejection regions of our hypothesis tests, whether the latter are based on parametric or nonparametric methods. Suppose A(θ′) is a 1 − α level acceptance region for testing the hypothesis θ = θ′; that is, we accept the hypothesis if our test statistic T belongs to the acceptance region A(θ′) and reject it otherwise. Let S(X) consist of all the parameter values θ* for which T[X] belongs to the acceptance region A(θ*). Then S(X) is a 1 − α level confidence interval for θ based on the set of observations X = {x1, x2, . . ., xn}.
The probability that S(X) includes θ0 when θ = θ0 is equal to the probability that T(X) belongs to the acceptance region of θ0, which is greater than or equal to 1 − α.
As our confidence 1 - a increases, from 90% to 95%, for example, the width of the resulting confidence interval increases. Thus, a 95% confidence interval is wider than a 90% confidence interval.
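To make the inversion concrete, here is a minimal sketch (with made-up numbers, not from the book) that builds the familiar z-interval for a normal mean with known σ by inverting the two-sided z-test: μ0 belongs to S(X) exactly when |x̄ − μ0|/(σ/√n) falls below the critical value. Run for two confidence levels, it also shows the 95% interval coming out wider than the 90% interval.

```python
import math
from statistics import NormalDist

def z_interval(xbar, sigma, n, alpha):
    """1 - alpha confidence interval for a normal mean with known sigma,
    obtained by inverting the two-sided z-test of H: mu = mu0."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # critical value z_{1-alpha/2}
    half = z * sigma / math.sqrt(n)
    return (xbar - half, xbar + half)

# Illustrative values only: sample mean 10.0, sigma 2.0, n = 25.
lo90, hi90 = z_interval(10.0, 2.0, 25, 0.10)  # 90% interval
lo95, hi95 = z_interval(10.0, 2.0, 25, 0.05)  # 95% interval
# The 95% interval is wider than the 90% interval, as noted above.
```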
By the same process, the rejection regions of our hypothesis tests can be derived from confidence intervals. Suppose our hypothesis is that the odds ratio for a 2 x 2 contingency table is 1. Then we would accept this null hypothesis if and only if our confidence interval for the odds ratio includes the value 1.
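As a sketch of this equivalence (the counts below are hypothetical, and the Wald interval for the log odds ratio is a standard textbook construction rather than the book's own code), we can test H: odds ratio = 1 by checking whether the 95% interval contains 1:

```python
import math
from statistics import NormalDist

# Hypothetical 2x2 contingency table [[a, b], [c, d]].
a, b, c, d = 20, 10, 12, 18

# Wald interval: log OR is approximately normal with this standard error.
log_or = math.log((a * d) / (b * c))
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = NormalDist().inv_cdf(0.975)  # two-sided 95%

lo = math.exp(log_or - z * se)
hi = math.exp(log_or + z * se)

# Reject H: OR = 1 at the 5% level iff the interval excludes 1.
reject = not (lo <= 1.0 <= hi)
```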
A common error is to misinterpret the confidence interval as a probability statement about the unknown parameter. It is not true that the probability that a parameter is included in a 95% confidence interval is 95%. What is true is that if we derive a large number of 95% confidence intervals, we can expect the true value of the parameter to be included in the computed intervals 95% of the time (provided the assumptions on which the tests and confidence intervals are based are satisfied). Like the p value, the upper and lower confidence limits of a particular confidence interval are random variables, because they depend upon the sample that is drawn.
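A small simulation (with assumed parameters, purely illustrative) makes the frequentist reading concrete: over many repeated samples, the nominal 95% interval covers the true mean close to 95% of the time.

```python
import math
import random
from statistics import NormalDist, mean

random.seed(1)
mu, sigma, n, trials = 5.0, 2.0, 30, 2000  # assumed values for illustration
z = NormalDist().inv_cdf(0.975)
half = z * sigma / math.sqrt(n)  # half-width of the 95% z-interval

covered = 0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = mean(xs)
    # Does this sample's interval contain the true mean?
    if xbar - half <= mu <= xbar + half:
        covered += 1

coverage = covered / trials  # close to 0.95 in repeated sampling
```

Each individual interval either contains μ or it does not; the 95% figure describes the long-run behavior of the procedure, not any single interval.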
Confidence intervals can be used both to evaluate and to report on the precision of estimates (see Chapter 4) and the significance of hypothesis tests (see Chapter 5). The probability that the interval covers the true value of the parameter of interest and the method used to derive the interval must also be reported.
In interpreting a confidence interval based on a test of significance, it is essential to realize that the center of the interval is no more likely than any other value, and the confidence to be placed in the interval is no greater than the confidence we have in the experimental design and statistical test it is based upon. (As always, GIGO.)
Multiple Tests
Whether we report p values or confidence intervals, we need to correct for multiple tests as described in Chapter 5. The correction should be based on the number of tests we perform, which in most cases will be larger than the number on which we report.
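One widely used correction (a sketch, not the book's prescription; the p values below are made up) is Bonferroni's: with m tests performed, compare each p value to α/m, or equivalently report each p value multiplied by m and capped at 1.

```python
# Bonferroni correction for multiplicity: the count m must reflect
# every test performed, not only the tests reported.
alpha = 0.05
p_values = [0.003, 0.020, 0.047, 0.350]  # hypothetical raw p values
m = len(p_values)

adjusted = [min(1.0, p * m) for p in p_values]      # adjusted p values
significant = [p < alpha / m for p in p_values]     # per-test threshold alpha/m
```

Note that 0.047 would pass an uncorrected 0.05 threshold but fails once the four tests are accounted for.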
Very few studies can avoid bias at some point in sample selection, study conduct, and results interpretation. We focus on the wrong endpoints; participants and co-investigators see through our blinding schemes; the effects of neglected and unobserved confounding factors overwhelm and outweigh the effects of our variables of interest. With careful and prolonged planning, we may reduce or eliminate many potential sources of bias, but seldom will we be able to eliminate all of them. Accept bias as inevitable and then endeavor to recognize and report all exceptions that do slip through the cracks.
Most biases occur during data collection, often as a result of taking observations from an unrepresentative subset of the population rather than from the population as a whole. The example of the erroneous forecast of Landon over Roosevelt was cited in Chapter 3. In Chapter 5, we considered a study that was flawed because of a failure to include planes that did not return from combat.

Acceptance Region, A(θ0). Set of values of the statistic T[X] for which we would accept the hypothesis H: θ = θ0. Its complement is called the rejection region.

Confidence Region, S(X). Also referred to as a confidence interval (for a single parameter) or a confidence ellipse (for multiple parameters). Set of values of the parameter θ for which, given the set of observations X = {x1, x2, . . ., xn} and the statistic T[X], we would accept the corresponding hypothesis.
When analyzing extended time series in seismological and neurological investigations, investigators typically select specific cuts (a set of consecutive observations in time) for detailed analysis, rather than trying to examine all the data (a near impossibility). Not surprisingly, such "cuts" usually possess one or more intriguing features not to be found in run-of-the-mill samples. Too often, theories evolve from these very biased selections. We expand on this point in Chapter 9 in our discussion of the limitations on the range over which a model may be applied.