# Common Errors in Statistics and How to Avoid Them - Good, P.I.


3. Let (β̂₀, β̂) be the maximum likelihood estimate based on the linear logistic model consisting of the variables chosen by forward logistic regression together with the intercept. On Gregory's data, it turned out that

APPENDIX B EXCESS ERROR ESTIMATION IN FORWARD LOGISTIC REGRESSION 177

(β̂₀, β̂₁₇, β̂₁₁, β̂₁₄, β̂₃) = (12.17, −1.83, −1.58, 0.56, −5.17)

The realization η̂ of Gregory's rule on his 155 chronic hepatitis patients predicts that a new patient with covariate vector t₀ will die if his predicted probability of death η̂(t₀) is greater than 1/2; that is,

logit η̂(t₀) = 12.17 − 1.83t₀,₁₇ − 1.58t₀,₁₁ + 0.56t₀,₁₄ − 5.17t₀,₃ > 0.    (3.2)
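As a small illustrative sketch, rule (3.2) can be applied to a new covariate vector; the helper name and covariate values below are hypothetical, not taken from the data:

```python
def predict_death(t17, t11, t14, t3):
    """Apply rule (3.2): predict death when the fitted logit exceeds 0, i.e. p > 1/2."""
    logit = 12.17 - 1.83 * t17 - 1.58 * t11 + 0.56 * t14 - 5.17 * t3
    return logit > 0

# With all four covariates at 0 (hypothetical values), the intercept alone gives logit 12.17.
print(predict_death(0.0, 0.0, 0.0, 0.0))
```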

For the dichotomous problem, we use the criterion

Q(y, η) = 1 if y ≠ η,
        = 0 otherwise.

The apparent error is q̂_app = 0.136. Figure 1 shows a histogram of B = 400 bootstrap replications of R* = R(F̂*, F̂). Recall that each R* was calculated using (2.1), where η̂* is the realization of Gregory's rule on the bootstrap sample x₁*, . . ., xₙ*. The bootstrap estimate of expected excess error was

r̂_boot = (1/B) Σ_{b=1}^{B} R*_b = 0.039.
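The bootstrap calculation can be sketched numerically; the data, fitting routine, and names below are hypothetical stand-ins (a plain two-variable logistic fit rather than Gregory's forward-selection rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for a training sample of n patients.
n = 155
X = rng.normal(size=(n, 2))
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ np.array([1.0, -1.5]))))).astype(int)

def fit_logistic(X, y, steps=200, lr=0.5):
    """Crude gradient-ascent logistic fit (illustrative only)."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        b += lr * X.T @ (y - 1 / (1 + np.exp(-(X @ b)))) / len(y)
    return b

def err_rate(b, X, y):
    """Misclassification rate of the 1/2-threshold rule, i.e. criterion Q."""
    return float(np.mean((X @ b > 0).astype(int) != y))

b_hat = fit_logistic(X, y)
q_app = err_rate(b_hat, X, y)            # apparent error of the realized rule

B = 400
excess = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)     # bootstrap sample drawn from F-hat
    b_star = fit_logistic(X[idx], y[idx])
    # R* = error of the bootstrap rule under F-hat minus its apparent error on the bootstrap sample
    excess[i] = err_rate(b_star, X, y) - err_rate(b_star, X[idx], y[idx])

r_boot = float(excess.mean())            # bootstrap estimate of expected excess error
q_boot = q_app + r_boot                  # bias-corrected error estimate
```

Averaging the B excess-error replications mirrors r̂_boot = (1/B) Σ R*_b; a histogram of `excess` is the analogue of Figure 1.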

The jackknife and cross-validation estimates were calculated to be

r̂_jack = 0.023,    r̂_cross = 0.019.

Adding expected excess error estimates to the apparent error gives bias-corrected estimates of the error:

q̂_boot = 0.175,    q̂_jack = 0.159,    q̂_cross = 0.145.
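The cross-validation correction can be sketched the same way; the setup below is again a hypothetical stand-in (a plain logistic fit, not Gregory's forward-selection rule), with a small n to keep the n refits cheap:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data.
n = 60
X = rng.normal(size=(n, 2))
y = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

def fit_logistic(X, y, steps=300, lr=0.5):
    """Crude gradient-ascent logistic fit (illustrative only)."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        b += lr * X.T @ (y - 1 / (1 + np.exp(-(X @ b)))) / len(y)
    return b

b_hat = fit_logistic(X, y)
q_app = float(np.mean((X @ b_hat > 0) != y))   # apparent error

# Leave-one-out: refit without case i, then test the refitted rule on case i alone.
loo_err = 0.0
for i in range(n):
    keep = np.arange(n) != i
    b_i = fit_logistic(X[keep], y[keep])
    loo_err += float((X[i] @ b_i > 0) != y[i])
loo_err /= n

r_cross = loo_err - q_app   # cross-validation estimate of expected excess error
```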

All three estimates require substantial computing time. FORTRAN programs for performing the preceding calculations and the ones in the following section were developed on a PDP-11/34 minicomputer. The cross-validation and jackknife estimates were computed in 1½ hours, whereas the 400 bootstrap replications required just under 6 hours. Computers are becoming faster and cheaper, however, and even now it is possible to compute these estimates for very complicated prediction rules, such as Gregory's rule.

Are B = 400 bootstrap replications enough? Notice that R*₁, . . ., R*₄₀₀ is a random sample from a population with mean


[Figure 1 appears here: an ASCII-rendered histogram of the 400 bootstrap replications R*, over bin midpoints from −0.052 to 0.129.]

FIGURE 1 Histogram of Bootstrap Replications for Gregory's Rule. The histogram summarizes the 400 bootstrap replications of R* that are used in estimating the expected excess error of Gregory's rule for predicting death in chronic hepatitis. Values of R* range from −0.045 to 0.116, with mean 0.039, standard deviation 0.027, and quantiles R*₍.₀₅₎ = −0.006 and R*₍.₉₅₎ = 0.084.

E_{F̂} R(F̂*, F̂) = r̂_boot = r̂,

and variance, say, s². Figure 1 shows that this population is close to normal, so

|r̂₄₀₀ − r̂| ≤ 2s/400^{1/2}

with high probability. Approximating s² with

σ̂²₄₀₀ = (1/(400 − 1)) Σ_{b=1}^{400} (R*_b − r̂₄₀₀)² = (0.027)²

gives

|r̂₄₀₀ − r̂| ≤ 2(0.027)/400^{1/2} = 0.0027;

so with high probability, r̂₄₀₀ is within 0.0027 of r̂ = r̂_boot.
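This Monte Carlo check can be reproduced numerically; the replication values below are simulated stand-ins drawn to match the reported mean 0.039 and standard deviation 0.027, not the actual R* values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-ins for the 400 bootstrap replications R*_b.
R_star = rng.normal(loc=0.039, scale=0.027, size=400)

B = len(R_star)
r_B = R_star.mean()                  # the average of the B replications
s = R_star.std(ddof=1)               # sample s.d. approximating s
half_width = 2 * s / np.sqrt(B)      # |r_B - r| < 2s/sqrt(B) with high probability
```

With s near 0.027 and B = 400, the half-width is about 2(0.027)/20 = 0.0027, matching the text.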


Before leaving the chronic hepatitis data, I mention that other prediction rules might be used. Examples include more complicated forms of variable selection, such as best subset regression, and alternative models, such as discriminant analysis. Friedman (1977) applied recursive partitioning to these data to obtain a binary-decision tree. I chose to focus attention on the rule based on forward logistic regression because it is the rule actually proposed and used by Gregory, the experimenter. Choosing an optimal prediction rule was not my goal.

4. THE PERFORMANCE OF CROSS-VALIDATION, THE JACKKNIFE, AND THE BOOTSTRAP IN SIMULATIONS

In the previous section we saw the cross-validation, jackknife, and bootstrap estimates of expected excess error for Gregory's rule. These estimates give bias corrections to the apparent error. Do these corrections offer real improvements? Introduce the estimators r̂_app = 0, the zero-correction estimate corresponding to the apparent error, and r̂_ideal = E(R), the best constant estimate, available only if we knew the expected excess error E(R). To compare r̂_cross, r̂_jack, and r̂_boot against these worst and best cases r̂_app and r̂_ideal, we perform some simulations.

To judge the performance of estimators in the simulations, we use two criteria:

1. the root mean squared error (RMSE) about the excess error, RMSE₁(R̂) = [E(R̂ − R)²]^{1/2}, and

2. the root mean squared error about the expected excess error, RMSE₂(R̂) = [E(R̂ − E(R))²]^{1/2}.

Notice that since (q̂_app + R̂) − (q̂_app + R) = R̂ − R, RMSE₁(R̂) also measures the performance of the bias-corrected estimate q̂_app + R̂ as an estimator of the true error q̂_app + R.
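The two criteria can be illustrated with a toy simulation; the trial count, distributions, and estimator below are hypothetical, not the simulations reported in this section:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trials: each has a true excess error R and an estimate R_hat of it.
trials = 2000
R = rng.normal(0.04, 0.02, size=trials)           # excess error varies trial to trial
R_hat = R + rng.normal(0.0, 0.03, size=trials)    # a noisy estimator of R

rmse1 = float(np.sqrt(np.mean((R_hat - R) ** 2)))         # about the excess error
rmse2 = float(np.sqrt(np.mean((R_hat - R.mean()) ** 2)))  # about the expected excess error

# The zero-correction estimator (apparent error, uncorrected) always estimates 0:
rmse1_app = float(np.sqrt(np.mean((0.0 - R) ** 2)))
```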

I pause to clarify the distinction between excess error and expected excess error. In the chronic hepatitis problem, the training sample that Gregory observed led to a particular realization (3.2). The excess error is the difference between the true and apparent error of the realized rule η̂ based on this training sample. The expected excess error averages the
