Good P.I. Common Errors in Statistics and How to Avoid Them. Wiley Publishing, 2003. 235 p.

A remedy proposed by the Court is of interest to us:
“If the expert testifies to the defendant’s paternity index or a substantially equivalent statistic, the expert must, if requested, calculate the probability that the defendant is the father by using more than a single assumption about the strength of the other evidence in the case.... If the expert uses various assumptions and makes these assumptions known, the fact finder’s attention will be directed to the other evidence in the case, and it will not be misled into adopting the expert’s assumption as to the correct weight to be assigned the other evidence. The expert should present calculations based on assumed prior probabilities of 0, 10, 20, ..., 90 and 100 percent.”23
21 303 Or. 262 (1987).
22 Id. 272.
23 Id. 275.
24 Id. 276, fn 9.
25 Id. 279. See also Kaye [1988].
The courts of many other states have followed Plemel. “The better practice may be for the expert to testify to a range of prior probabilities, such as 10, 50 and 90 percent, and allow the trier of fact to determine which to use.”26
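The Plemel court's remedy is easy to mechanize. Below is a brief Python sketch that tabulates the posterior probability of paternity over the range of priors the court suggested; the paternity index of 100 is a hypothetical value for illustration, not a figure from the case record.

```python
def posterior_paternity_prob(paternity_index: float, prior: float) -> float:
    """Bayes' Theorem in odds form: posterior odds = paternity index * prior odds.
    Converts an assumed prior probability of paternity into a posterior probability."""
    if prior == 1.0:
        return 1.0  # certainty in, certainty out
    prior_odds = prior / (1.0 - prior)
    post_odds = paternity_index * prior_odds
    return post_odds / (1.0 + post_odds)

# Hypothetical paternity index of 100, tabulated over the priors
# 0, 10, 20, ..., 100 percent as the Plemel court directed.
for pct in range(0, 101, 10):
    p = posterior_paternity_prob(100.0, pct / 100)
    print(f"prior {pct:3d}%  ->  posterior {p:.3f}")
```

Presenting the whole table, rather than a single number, leaves the weight of the non-genetic evidence where it belongs: with the trier of fact.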
Applications to Experiments and Clinical Trials
Outside the courtroom, where the rules of evidence are less rigorous, we have much greater latitude in the adoption of a prior distribution for the unknown parameter(s). Two approaches are common:
1. Adopting some synthetic distribution—a normal or a Beta.
2. Using subjective probabilities.
The synthetic approach, though common among the more computationally inclined, is difficult to justify. The theoretical basis for an observation having a normal distribution is well known—the observation will be the sum of a large number of factors, each of which makes only a minute contribution to the total. But could such a description be applicable to a population parameter?
Here is an example of this approach taken from a report by D. A. Berry27: “A study reported by Freireich et al.28 was designed to evaluate the effectiveness of a chemotherapeutic agent 6-mercaptopurine (6-MP) for the treatment of acute leukemia. Patients were randomized to therapy in pairs. Let p be the population proportion of pairs in which the 6-MP patient stays in remission longer than the placebo patient. (To distinguish probability p from a probability distribution concerning p, I will call it a population proportion or a propensity.) The null hypothesis H0 is p = 1/2: no effect of 6-MP. Let H1 stand for the alternative hypothesis that p > 1/2. There were 21 pairs of patients in the study, and 18 of them favored 6-MP.”
“Suppose that the prior probability of the null hypothesis is 70 percent and that the remaining probability of 30 percent is on the interval (0,1) uniformly. ... So under the alternative hypothesis H1, p has a uniform(0,1) distribution. This is a mixture prior in the sense that it is 70 percent discrete and 30 percent continuous.”
26 County of El Dorado v. Misura, 33 Cal. App. 4th 73 (1995) citing Plemel, supra, at
p. 1219; Peterson [1982] at p. 691, fn. 74; Paternity of M.J.B., 144 Wis. 2d 638, 643; State v. Jackson, 320 N.C. 452, 455 (1987); and Kammer v. Young, 73 Md. App. 565, 571 (1988). See also State v. Spann, 130 N.J. 484 at p. 499 (1993).
27 The full report titled “Using a Bayesian Approach in Medical Device Development” may be obtained from Donald A. Berry at the Institute of Statistics & Decision Sciences and Comprehensive Cancer Center, Duke University, Durham, NC 27708.
28 Blood 1963; 21:699-716.
“The uniform(0,1) distribution is also the beta(1,1) distribution. Updating the beta(a,b) distribution after s successes and f failures is easy, namely, the new distribution is beta(a + s, b + f). So for s = 18 and f = 3, the posterior distribution under H1 is beta(19,4).”
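Berry's calculation can be reproduced in a few lines. The following Python sketch (a reconstruction using exact rational arithmetic; all constants come from the passage above) computes the marginal likelihood of the data under each hypothesis and the posterior probability of H0 under the 70/30 mixture prior:

```python
from fractions import Fraction
from math import comb

s, f = 18, 3            # pairs favoring 6-MP, and pairs favoring placebo
n = s + f               # 21 pairs in all

# Marginal likelihood of the data under H0 (p = 1/2 exactly).
m0 = Fraction(comb(n, s), 2**n)

# Under H1, p ~ uniform(0,1) = beta(1,1); integrating p out gives
# C(n,s) * B(s+1, f+1), which for a uniform prior reduces to 1/(n+1).
m1 = Fraction(1, n + 1)

# Berry's mixture prior: 70% mass on H0, 30% spread uniformly under H1.
prior_h0 = Fraction(7, 10)
post_h0 = prior_h0 * m0 / (prior_h0 * m0 + (1 - prior_h0) * m1)
print(float(post_h0))   # posterior probability of "no effect"

# Under H1 the posterior for p is beta(1 + s, 1 + f) = beta(19, 4).
```

With these numbers the posterior probability of the null drops to roughly 3 percent, illustrating how strongly 18 successes in 21 pairs tell against "no effect."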
The subjective approach places an added burden on the experimenter.
As always, she needs to specify each of the following:
• Maximum acceptable frequency of Type I errors (that is, the significance level)
• Alternative hypotheses of interest
• Power desired against each alternative
• Losses associated with Type I and Type II errors
With the Bayesian approach, she must also provide a priori probabilities.
The argument in favor of subjective probabilities is that they permit expert judgment to be incorporated, in a formal way, into inferences and decision making. The argument against them, in the words of the late Edward Barankin: “How are you planning to get these values—beat them out of the researcher?” More appealing, if perhaps no more successful, approaches are described by Good [1950] and Kadane et al. [1980].
Bayes' Factor
One approach that lets us take advantage of the opportunities Bayes’ Theorem provides, while avoiding its limitations and the objections raised in the courts, is to use the minimum Bayes’ factor introduced by Edwards et al. [1963].
The Bayes factor is a measure of the degree to which the data from a study move us from our initial position. Let B denote the odds we place on the primary hypothesis before we examine the data, and let A be the odds we assign after seeing the data; the Bayes factor is defined as A/B.
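In odds form the relationship is simply A = (Bayes factor) × B, so the factor can be read off as the ratio A/B. A minimal Python sketch, with hypothetical odds chosen only for illustration:

```python
def posterior_odds(prior_odds: float, bayes_factor: float) -> float:
    """Bayes' Theorem in odds form: A = (Bayes factor) * B."""
    return bayes_factor * prior_odds

# Hypothetical numbers: even prior odds (1:1) on the primary hypothesis,
# and data 20 times more likely under the alternative, i.e. a Bayes
# factor of 1/20 against the primary hypothesis.
B = 1.0
bf = 1.0 / 20.0
A = posterior_odds(B, bf)
print(A)        # 0.05, i.e. posterior odds of 1:20
print(A / B)    # recovers the Bayes factor as A/B
```

Note that the factor itself does not depend on the prior odds; it summarizes only what the data contribute.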