Bolstad, W. M. Introduction to Bayesian Statistics. Wiley, 2004. 361 pp. ISBN 0-471-27020-2

5.5 Let X and Y be jointly distributed discrete random variables. Their joint probability distribution is given in the following table:
(a) Calculate the marginal probability distribution of X.
(b) Calculate the marginal probability distribution of Y.
(c) Are X and Y independent random variables? Explain why or why not.
(d) Calculate the conditional probability P(X = 3|Y = 1).
5.6 Let X and Y be jointly distributed discrete random variables. Their joint probability distribution is given in the following table:
(a) Calculate the marginal probability distribution of X.
(b) Calculate the marginal probability distribution of Y.
(c) Are X and Y independent random variables? Explain why or why not.
(d) Calculate the conditional probability P(X = 2|Y = 3).
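
The joint probability tables for exercises 5.5 and 5.6 did not survive in this copy, so the exercises cannot be worked numerically here. As a generic illustration only, the following sketch shows how such marginals, the independence check, and a conditional probability are computed from any joint table; the 3 × 3 table below is a hypothetical placeholder, not the book's data.

```python
import numpy as np

# Hypothetical joint probability table p(x, y): rows index the values of X,
# columns index the values of Y. Any nonnegative table summing to 1 works.
x_vals = [1, 2, 3]
y_vals = [1, 2, 3]
joint = np.array([[0.10, 0.05, 0.05],
                  [0.10, 0.20, 0.10],
                  [0.05, 0.15, 0.20]])

# (a), (b) Marginals: sum across rows for X, down columns for Y.
marginal_x = joint.sum(axis=1)
marginal_y = joint.sum(axis=0)

# (c) X and Y are independent iff p(x, y) = p(x) p(y) in every cell.
independent = np.allclose(joint, np.outer(marginal_x, marginal_y))

# (d) Conditional probability, e.g. P(X = 3 | Y = 1): joint cell over
# the column total for Y = 1.
p_x3_given_y1 = joint[x_vals.index(3), y_vals.index(1)] / marginal_y[y_vals.index(1)]

print(marginal_x, marginal_y, independent, p_x3_given_y1)
```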
6
Bayesian Inference for Discrete Random Variables
In this chapter we introduce Bayes’ theorem for discrete random variables. Then we see how we can use it to revise our beliefs about the parameter, given the sample data that depends on the parameter. This is how we will perform statistical inference in a Bayesian manner.
We will consider the parameter to be a random variable X, which has possible values x_1, ..., x_I. We never observe the parameter random variable. The random variable Y, which depends on the parameter, has possible values y_1, ..., y_J. We make inferences about the parameter random variable X given the observed value Y = y_j using Bayes' theorem.
The Bayesian universe consists of all possible ordered pairs (x_i, y_j) for i = 1, ..., I and j = 1, ..., J. This is analogous to the universe we used for joint random variables in the last chapter. However, we will not consider the random variables X and Y the same way. The events (X = x_1), ..., (X = x_I) partition the universe, but we never observe which one has occurred. The event Y = y_j is observed.
We know that the Bayesian universe has two dimensions: the horizontal dimension, which is observable, and the vertical dimension, which is unobservable. In the horizontal direction it goes across the sample space, which is the set of all possible values, {y_1, ..., y_J}, of the observed random variable Y. In the vertical direction it goes through the parameter space, which is the set of all possible parameter values, {x_1, ..., x_I}. The Bayesian universe for discrete random variables is shown in Table 6.1. This is analogous to the Bayesian universe for events described in Chapter 4.
Table 6.1 The Bayesian universe

(x_1, y_1)   (x_1, y_2)   ...   (x_1, y_j)   ...   (x_1, y_J)
(x_2, y_1)   (x_2, y_2)   ...   (x_2, y_j)   ...   (x_2, y_J)
...
(x_I, y_1)   (x_I, y_2)   ...   (x_I, y_j)   ...   (x_I, y_J)
The parameter value is unobserved. Probabilities are defined at all points in the Bayesian universe.
We will change our notation slightly. We will use f(·) to denote a probability distribution (conditional or unconditional) that contains the observable random variable Y, and g(·) to denote a probability distribution (conditional or unconditional) that only contains the (unobserved) parameter random variable X. This clarifies the distinction between Y, the random variable that we will observe, and X, the unobserved parameter random variable that we want to make our inference about. Each of the joint probabilities in the Bayesian universe is found using the multiplication rule
f(x_i, y_j) = g(x_i) × f(y_j | x_i).
The marginal distribution of Y is found by summing the columns. We show the joint and marginal probability functions in Table 6.2. Note that this is similar to how we presented the joint and marginal distribution for two discrete random variables in the previous chapter (Table 5.3). However, now we have moved the marginal probability function of X over to the left-hand side and labeled it the prior probability function of the parameter X, to indicate that it is known to us at the beginning. We also note the changed notation.
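
As a small numeric illustration (the prior and likelihood values here are hypothetical, not from the book), the body of a table like Table 6.2 can be built from the multiplication rule, and the marginal of Y recovered by summing columns:

```python
import numpy as np

# Hypothetical prior g(x_i) over I = 3 parameter values (sums to 1).
prior = np.array([0.2, 0.3, 0.5])

# Hypothetical conditional probabilities f(y_j | x_i); row i is the
# distribution of Y when X = x_i, so each row sums to 1.
likelihood = np.array([[0.7, 0.2, 0.1],
                       [0.4, 0.4, 0.2],
                       [0.1, 0.3, 0.6]])

# Multiplication rule: f(x_i, y_j) = g(x_i) * f(y_j | x_i).
joint = prior[:, None] * likelihood

# Marginal of Y: sum each column of the joint table.
marginal_y = joint.sum(axis=0)

print(joint)       # the body of the joint table
print(marginal_y)  # the bottom row f(y_j); sums to 1
```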
When we observe Y = y_j, the reduced Bayesian universe is the set of ordered pairs in the j-th column. This is shown in Table 6.3. The posterior probability function of X given Y = y_j is given by
g(x_i | y_j) = [ g(x_i) × f(y_j | x_i) ] / [ Σ_{i=1}^{I} g(x_i) × f(y_j | x_i) ].
Let us look at the parts of the formula.
• The prior distribution of the discrete random variable X is given by the prior probability function g(x_i), for i = 1, ..., I. This is what we believe the probability of each x_i to be before we look at the data. It must come from prior experience, not from the current data.
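
Continuing the hypothetical prior and likelihood from the earlier sketch, the posterior probability function is just the observed column of the joint table renormalized to sum to one. This is a sketch of the computation, not the book's code:

```python
import numpy as np

prior = np.array([0.2, 0.3, 0.5])             # hypothetical g(x_i)
likelihood = np.array([[0.7, 0.2, 0.1],       # hypothetical f(y_j | x_i),
                       [0.4, 0.4, 0.2],       # one row per parameter value,
                       [0.1, 0.3, 0.6]])      # each row sums to 1

j = 0                                # suppose we observe Y = y_1
column = prior * likelihood[:, j]    # g(x_i) * f(y_j | x_i) for each i
posterior = column / column.sum()    # divide by f(y_j), the column total

print(posterior)  # g(x_i | y_1) ≈ [0.4516, 0.3871, 0.1613]
```

Note that the denominator column.sum() is exactly the marginal f(y_j) from the previous sketch, so the posterior always sums to one.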
Table 6.2 The joint and marginal distributions of X and Y

           prior     y_1           ...   y_j           ...   y_J
x_1        g(x_1)    f(x_1, y_1)   ...   f(x_1, y_j)   ...   f(x_1, y_J)
x_2        g(x_2)    f(x_2, y_1)   ...   f(x_2, y_j)   ...   f(x_2, y_J)
...        ...       ...                 ...                 ...
x_I        g(x_I)    f(x_I, y_1)   ...   f(x_I, y_j)   ...   f(x_I, y_J)
marginal             f(y_1)        ...   f(y_j)        ...   f(y_J)
Table 6.3 The reduced Bayesian universe given Y = y_j

(x_1, y_j)
(x_2, y_j)
...
(x_I, y_j)