Introduction to Bayesian statistics - Bolstad M.

Bolstad M. Introduction to Bayesian statistics - Wiley Publishing, 2004. - 361 p.
ISBN 0-471-27020-2

interval, and vice versa. The confidence interval "summarizes" all possible null hypotheses that would be accepted if they were tested.
Bayesian Test of a Two-Sided Hypothesis
From the Bayesian perspective, the posterior distribution of the parameter given the data sums up our entire belief after seeing the data. However, the idea of hypothesis testing as a protector of scientific credibility is well established in science. So we look at using the posterior distribution to test a point null hypothesis versus a two-sided alternative in a Bayesian way.
If we use a continuous prior, we will get a continuous posterior. The probability of the exact value represented by the point null hypothesis will be zero, so we cannot use the posterior probability of that value to test the hypothesis. Instead, we use a correspondence similar to the one between confidence intervals and hypothesis tests, but with the credible interval in place of the confidence interval.
Compute a (1 - α) × 100% credible interval for π. If π0 lies inside the credible interval, accept (do not reject) the null hypothesis H0 : π = π0, and if π0 lies outside the credible interval, then reject the null hypothesis.
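As a sketch, this decision rule can be coded directly. The helper below is hypothetical (name and interface are not from the text), but it uses the same normal approximation to a beta posterior that the chapter's example uses:

```python
from math import sqrt
from statistics import NormalDist

def credible_interval_test(a_post, b_post, pi_0, alpha=0.05):
    """Test H0: pi = pi_0 using a (1 - alpha) x 100% credible interval.

    Uses the normal approximation to the beta(a_post, b_post) posterior:
    mean m = a/(a + b), variance ab / ((a + b)^2 (a + b + 1)).
    (Hypothetical helper; the approximation follows the text.)
    """
    m = a_post / (a_post + b_post)
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    z = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = .05
    lower, upper = m - z * sqrt(var), m + z * sqrt(var)
    accept = lower <= pi_0 <= upper           # "accept" = do not reject H0
    return (lower, upper), accept

# Example 15's beta(11, 6) posterior, testing H0: pi = .5
interval, accept = credible_interval_test(11, 6, 0.5)
```

With the beta(11, 6) posterior this reproduces the interval (.426, .868), so the null value .5 is not rejected.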
Example 15 (continued) If we use a uniform prior distribution, the posterior is the beta(10 + 1, 5 + 1) = beta(11, 6) distribution. A 95% Bayesian credible interval for π, found using the normal approximation, is

11/17 ± 1.96 × sqrt( (11 × 6) / ((11 + 6)² × (11 + 6 + 1)) ) = .647 ± .221 = (.426, .868).
The null value π = .5 lies within the credible interval, so we cannot reject the null hypothesis. It remains a credible value.
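The arithmetic of the normal approximation is easy to check; the few lines below simply reproduce the numbers above, assuming nothing beyond the beta(11, 6) posterior:

```python
from math import sqrt

a, b = 10 + 1, 5 + 1                 # beta(11, 6) posterior from a uniform prior
mean = a / (a + b)                   # posterior mean 11/17
var = (a * b) / ((a + b) ** 2 * (a + b + 1))
half_width = 1.96 * sqrt(var)        # 1.96 = z-value for a 95% interval

print(round(mean, 3), round(half_width, 3))                       # 0.647 0.221
print(round(mean - half_width, 3), round(mean + half_width, 3))   # 0.426 0.868
```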
Main Points
• The posterior distribution of the parameter given the data is the entire inference from a Bayesian perspective. Probabilities calculated from the posterior distribution are post-data because the posterior distribution is found after the observed data has been taken into the analysis.
• Under the frequentist perspective there are specific inferences about the parameter: point estimation, confidence intervals, and hypothesis tests.
• Frequentist statistics considers the parameter a fixed but unknown constant. The only kind of probability allowed is long run relative frequency.
• The sampling distribution of a statistic is its distribution over all possible random samples given the fixed parameter value. Frequentist statistics is based on the sampling distribution.
• Probabilities calculated using the sampling distribution are pre-data because they are based on all possible random samples, not the specific random sample we obtained.
• An estimator of a parameter is unbiased if its expected value calculated from the sampling distribution is the true value of the parameter.
• Frequentist statisticians often call the minimum variance unbiased estimator the best estimator.
• The mean squared error of an estimator measures its average squared distance from the true parameter value. It is the square of the bias plus the variance.
• Bayesian estimators are often better than frequentist estimators, even when judged by frequentist criteria such as mean squared error.
• Seeing how a Bayesian estimator performs using frequentist criteria for a range of possible parameter values is called a pre-posterior analysis, because it can be done before we obtain the data.
• A (1 - α) × 100% confidence interval for a parameter θ is an interval (l, u) such that

P(l < θ < u) = 1 - α,

where the probability is found using the sampling distribution of an estimator for θ. The correct interpretation is that (1 - α) × 100% of the random intervals calculated this way do contain the true value. When the actual data are put in and the endpoints calculated, there is nothing left to be random. The endpoints are numbers; the parameter is fixed but unknown. We say that we are (1 - α) × 100% confident that the calculated interval covers the true parameter. The confidence comes from our belief in the method used to calculate the interval. It does not say anything about the actual interval we got for that particular data set.
• A (1 - α) × 100% Bayesian credible interval for θ is a range of parameter values that has posterior probability (1 - α).
• Frequentist hypothesis testing is used to determine whether the actual parameter could be a specific value. The sample space is divided into a rejection region and an acceptance region such that the probability the test statistic lies in the rejection region if the null hypothesis is true is less than the level of significance α. If the test statistic falls into the rejection region, we reject the null hypothesis at level of significance α.
• Or we could calculate the p-value. If the p-value < α, we reject the null hypothesis at level α.
• The p-value is not the probability the null hypothesis is true. Rather, it is the probability of observing what we observed, or even something more extreme, given that the null hypothesis is true.
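A pre-posterior analysis of the kind described in the points above can be done with closed-form mean squared error formulas. The sketch below compares, for binomial sampling with n = 15 (the sample size of the running example), the frequentist estimator y/n with the uniform-prior Bayesian posterior mean (y + 1)/(n + 2); each MSE is bias² plus variance:

```python
def freq_mse(pi, n):
    # y/n is unbiased, so its MSE is just its variance: pi(1 - pi)/n
    return pi * (1 - pi) / n

def bayes_mse(pi, n):
    # Posterior mean (y + 1)/(n + 2) under a uniform prior:
    # bias = (1 - 2*pi)/(n + 2), variance = n*pi*(1 - pi)/(n + 2)^2
    bias = (1 - 2 * pi) / (n + 2)
    var = n * pi * (1 - pi) / (n + 2) ** 2
    return bias ** 2 + var

n = 15
for pi in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(pi, round(freq_mse(pi, n), 5), round(bayes_mse(pi, n), 5))
```

For this n, the Bayesian estimator has the smaller MSE when π is near .5 and the larger MSE when π is near 0 or 1; evaluating it over such a grid of parameter values, before any data are collected, is exactly the pre-posterior comparison the bullet describes.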