
Smith A.F.M., Statistical Analysis of Finite Mixture Distributions. Wiley, 1985. 130 p.
ISBN 0-470-90763-4

This problem has been treated by a number of authors using variants of the various approximation procedures which we reviewed in case A. Decision-directed schemes have been studied by Glaser (1961), Scudder (1965), Young and Farjo (1972), Young and Calvert (1974), and Farjo and Young (1976); probabilistic teacher schemes have been studied by Agrawala (1970) and Cooper (1975);
and probabilistic editor schemes have been studied by Athans, Whiting, and Gruber (1977).
We shall begin our detailed discussion by considering the formal Bayesian solution when θ is assigned a normal prior density. Specifically, if p(θ), the a priori density for θ, is taken to be normal with mean θ₀ and variance τ², which we shall denote here by N(θ; θ₀, τ²), then it is straightforward to verify, using Bayes' theorem, that the a posteriori density for θ, given x₁, can be written as a weighted average of two normal densities (equation (6.3.10)), the sampling density of each observation being

p(x|θ) = π₁f₁(x|θ) + π₂f₂(x),

where f₁(x|θ) denotes the signal density and f₂(x) the known noise density.
Sequential problems and procedures
Here wᵢ(x₁) is the probability, having observed x₁, that the first observation is a signal (i = 1) or noise (i = 2), respectively, and δᵢ₁ is the usual Kronecker delta. Equation (6.3.10) therefore has the form of a weighted average, with appropriate weights, of the two forms of a posteriori density (updating and not updating p(θ), respectively) that would be used if the true origin of x₁ (signal or noise) were known. Repeating the Bayes calculations for p(θ|x₁, x₂), we would obtain, in place of (6.3.10), a weighted average of four densities (corresponding to the four possible sequences of signal and noise), and the number of component densities increases to 2ⁿ when we condition on x₁,…,xₙ.
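The combinatorial growth described above is easy to make concrete. The sketch below (illustrative, not from the text) enumerates the signal/noise labellings that index the components of the exact posterior: one normal component per labelling, so the count doubles with each observation.

```python
from itertools import product

# Each observation is either signal or noise, so the exact Bayesian
# posterior after n observations is a mixture with one normal component
# per signal/noise labelling of (x_1, ..., x_n): 2^n components in all.
def posterior_labellings(n):
    return list(product(("signal", "noise"), repeat=n))

for n in (1, 2, 3, 10):
    print(n, len(posterior_labellings(n)))
```

Even for modest n the exact posterior is unmanageable, which motivates the quasi-Bayes approximation that follows.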
In order to avoid this computational and storage problem, while trying to keep close to the spirit of the Bayesian solution, Smith and Makov (1981) proposed the following quasi-Bayes approximation. They first note that w₁(x₁) involves the integration of f₁(x₁|θ) and p(x₁|θ) with respect to p(θ). Instead of using the resulting integrated forms, they propose using f₁(x₁|θ₀) and p(x₁|θ₀), simply conditioning on the (prior) mean of p(θ). They therefore replace w₁(x₁) by

w₁(x₁, θ₀) = π₁f₁(x₁|θ₀)/p(x₁|θ₀).
The second part of the approximation consists in replacing (6.3.10) by a single normal distribution. Noting that the two component densities, written in terms of δᵢ₁, correspond to forms that would arise from supervised learning, and noting, also, that if we treat δᵢ₁ as an indicator random variable, its expected value, given x₁, is w₁(x₁), the quasi-Bayes procedure approximates (6.3.10) by
p(θ|x₁) ≈ N(θ; [τ⁻²θ₀ + w₁(x₁, θ₀)x₁]/[τ⁻² + w₁(x₁, θ₀)], [τ⁻² + w₁(x₁, θ₀)]⁻¹).   (6.3.13)
Subsequent updating proceeds in the same way, so that, given the reproductive property of the normal distribution, we obtain
p(θ|x₁,…,xₙ) ≈ N(θ; θ̂ₙ, [τ⁻² + Σⱼ₌₁ⁿ w₁(xⱼ, θ̂ⱼ₋₁)]⁻¹),

where

w₁(xⱼ, θ̂ⱼ₋₁) = π₁f₁(xⱼ|θ̂ⱼ₋₁)/p(xⱼ|θ̂ⱼ₋₁)   (6.3.15)

and

θ̂ₙ = [τ⁻²θ₀ + Σⱼ₌₁ⁿ w₁(xⱼ, θ̂ⱼ₋₁)xⱼ] / [τ⁻² + Σⱼ₌₁ⁿ w₁(xⱼ, θ̂ⱼ₋₁)],   (6.3.16)

the mean of the approximating normal a posteriori density given x₁,…,xₙ.
Equations (6.3.15) and (6.3.16) provide the basis for the proposed quasi-Bayes (QB) procedure for successive estimates of θ.
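The sequential procedure defined by (6.3.15) and (6.3.16) can be sketched as follows. The noise density, the prior parameters, and the simulated data below are illustrative assumptions, not values from the text.

```python
import math
import random

def qb_estimate(xs, pi1, f2, theta0, tau2):
    """Quasi-Bayes recursion (6.3.15)-(6.3.16): maintain the running
    precision tau^{-2} + sum_j w1(x_j, theta_{j-1}) and the running
    weighted sum tau^{-2}*theta0 + sum_j w1(x_j, theta_{j-1})*x_j."""
    def f1(x, theta):                      # signal density: N(theta, 1)
        return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

    def w1(x, theta):                      # prob. that x is signal, with the
        num = pi1 * f1(x, theta)           # unknown mean fixed at the current
        return num / (num + (1 - pi1) * f2(x))  # estimate (quasi-Bayes step)

    prec = 1.0 / tau2                      # tau^{-2}
    wsum = theta0 / tau2                   # tau^{-2} * theta0
    theta = theta0
    for x in xs:
        w = w1(x, theta)
        prec += w
        wsum += w * x
        theta = wsum / prec                # theta_hat_n of (6.3.16)
    return theta

# Illustrative run: signal N(2, 1) with pi1 = 0.8, known noise N(0, 9).
random.seed(0)
def noise_pdf(x):
    return math.exp(-0.5 * (x / 3) ** 2) / (3 * math.sqrt(2 * math.pi))

data = [random.gauss(2.0, 1.0) if random.random() < 0.8 else random.gauss(0.0, 3.0)
        for _ in range(2000)]
est = qb_estimate(data, pi1=0.8, f2=noise_pdf, theta0=0.0, tau2=1.0)
print(round(est, 2))
```

Only two scalars (the running precision and weighted sum) need be stored, in contrast with the 2ⁿ components of the exact posterior.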
It is easily seen from (6.3.16) that the proposed estimates satisfy the general recursive relation

θ̂ₙ₊₁ = θ̂ₙ − dₙ y(θ̂ₙ, xₙ₊₁),   (6.3.17)
where

dₙ = [τ⁻² + Σᵢ₌₁ⁿ⁺¹ w₁(xᵢ, θ̂ᵢ₋₁)]⁻¹

and

y(θ̂ₙ, xₙ₊₁) = (θ̂ₙ − xₙ₊₁) w₁(xₙ₊₁, θ̂ₙ).
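As a quick numerical check (an illustration, not from the text), the stochastic-approximation form (6.3.17), with the gain dₙ and correction y defined above, reproduces the direct ratio (6.3.16) step for step; the model constants below are arbitrary.

```python
import math
import random

random.seed(1)
pi1, tau2, theta0 = 0.7, 2.0, 0.0

def f1(x, theta):                      # signal density N(theta, 1)
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

def f2(x):                             # assumed known noise density N(0, 4)
    return math.exp(-0.5 * (x / 2) ** 2) / (2 * math.sqrt(2 * math.pi))

def w1(x, theta):
    num = pi1 * f1(x, theta)
    return num / (num + (1 - pi1) * f2(x))

xs = [random.uniform(-3, 3) for _ in range(50)]

# Direct form (6.3.16): ratio of accumulated weighted sums.
num, den = theta0 / tau2, 1.0 / tau2
direct = [theta0]
for x in xs:
    w = w1(x, direct[-1])
    num += w * x
    den += w
    direct.append(num / den)

# Recursive form (6.3.17): theta_{n+1} = theta_n - d_n * y(theta_n, x_{n+1}).
prec = 1.0 / tau2
theta = theta0
recursive = [theta0]
for x in xs:
    w = w1(x, theta)
    prec += w                          # tau^{-2} + sum_{i<=n+1} w1(x_i, theta_{i-1})
    d = 1.0 / prec                     # gain d_n
    theta = theta - d * (theta - x) * w
    recursive.append(theta)

max_diff = max(abs(a - b) for a, b in zip(direct, recursive))
print(max_diff)
```

The two trajectories agree to floating-point precision, confirming that (6.3.17) is an exact restatement of (6.3.16) rather than a further approximation.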
Noting that, for large n,

dₙ ≈ [τ⁻² + (n + 1)π₁]⁻¹ ≈ π₁⁻¹/n,
we see that (6.3.17) essentially has the general form of (6.3.1), with the particular choice of gain function

G_QB(z) = π₁⁻¹,
whereas the fully efficient recursion would require

G(z) = [∫ π₁²(x − z)² f₁²(x|z) / p(x|z) dx]⁻¹ = I(z)⁻¹,

a gain function which is less attractive computationally.
Theorem 6.3.2
If 2I(θ) > π₁ and the recursion defined by (6.3.17) is truncated to the region (−M, M) ∩ Θ, then
Figure 6.3.1 Efficiency of the quasi-Bayes procedure. Reproduced with permission from Smith and Makov (1981). Copyright © 1981 IEEE
(a) n^{1/2}(θ̂ₙ − θ) is asymptotically normally distributed, with zero mean and variance

v_QB = π₁⁻² I(θ) / [2π₁⁻¹ I(θ) − 1];   (6.3.22)
(b) the relative efficiency is given by
v_OPT / v_QB = 2π₁/I(θ) − [π₁/I(θ)]².   (6.3.23)
Proof: Assumptions A1 to A4 are satisfied for finite Θ. The result follows immediately from Theorem 6.3.1 and Corollary 6.3.2, with G_QB(z) = c = π₁⁻¹.
Figure 6.3.1, taken from Smith and Makov (1981), shows the behaviour of (6.3.23) as a function of π₁ for some selected values of θ (recall that (6.3.23) only applies if 2I(θ) > π₁). The efficiency of the simple quasi-Bayes recursion is seen to be reasonable, even for π₁ as low as 0.65, and it is uniformly high for values of π₁ greater than 0.75.
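The efficiency curve can be explored numerically. In the sketch below, I(θ) is evaluated by trapezoidal quadrature of π₁²(x − θ)²f₁²(x|θ)/p(x|θ); the choice of a standard normal noise density and of θ = 2 is an illustrative assumption, since the text does not record the settings used for the figure.

```python
import math

def efficiency(pi1, theta, f2):
    """Relative efficiency (6.3.23): 2*pi1/I(theta) - (pi1/I(theta))^2,
    with I(theta) = integral of pi1^2 (x - theta)^2 f1(x|theta)^2 / p(x|theta) dx
    approximated by a simple trapezoidal rule on a wide grid."""
    def f1(x):                              # signal density N(theta, 1)
        return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)
    lo, hi, n = theta - 12.0, theta + 12.0, 4000
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        p = pi1 * f1(x) + (1 - pi1) * f2(x)
        g = pi1 ** 2 * (x - theta) ** 2 * f1(x) ** 2 / p
        total += g * (0.5 if i in (0, n) else 1.0)
    fisher = total * h                      # I(theta)
    r = pi1 / fisher
    return 2 * r - r * r                    # = 1 - (r - 1)^2 <= 1

def std_normal(x):                          # assumed noise density N(0, 1)
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

for p1 in (0.65, 0.75, 0.9):
    print(p1, round(efficiency(p1, theta=2.0, f2=std_normal), 3))
```

Since the efficiency equals 1 − (π₁/I(θ) − 1)², it is at most 1, with equality when I(θ) = π₁ (no overlap between signal and noise).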
6.3.3 A quasi-Bayes sequential procedure for the contaminated normal distribution
We shall consider the case of the contaminated normal distribution (discussed earlier in Example 2.2.1 and Section 4.4), where each of the sequence of observations to be processed may come from either a ‘good’ run or a ‘bad’ run. In particular, we shall assume that a good observation has a normal distribution with unknown mean θ and unit variance, whose density is denoted by f₁(x|θ), and that a bad observation has a normal distribution with the same mean θ, but with an inflated (known) variance λ² (> 1), whose density is denoted by f₂(x|θ). The mixing probabilities for the good and bad components are assumed constant and known and are denoted by π₁ and π₂ (= 1 − π₁).
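For concreteness, sampling from this contaminated normal model can be sketched as follows; the values of θ, λ, and π₁ are illustrative choices, not from the text.

```python
import random

def draw_contaminated(n, theta, lam, pi1, rng):
    """Draw n observations from pi1*N(theta, 1) + (1 - pi1)*N(theta, lam^2):
    'good' runs have unit variance, 'bad' runs the inflated variance lam^2.
    Both components share the unknown mean theta."""
    out = []
    for _ in range(n):
        sd = 1.0 if rng.random() < pi1 else lam
        out.append(rng.gauss(theta, sd))
    return out

rng = random.Random(42)
xs = draw_contaminated(10000, theta=1.0, lam=3.0, pi1=0.9, rng=rng)
mean = sum(xs) / len(xs)
print(round(mean, 2))
```

Because the two components share the mean θ, the sample mean remains unbiased here; the interest of the quasi-Bayes treatment lies in downweighting the high-variance 'bad' observations to improve efficiency.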