Smith, A.F.M. Statistical analysis of mixture distribution. Wiley, 1985. 130 pp.
ISBN 0-470-90763-4

For cases (b) and (d) the same principle is used after a preliminary transformation has been made from p(x) to a curve which does have different
component modes; see later.
The difficulty in practice is to construct the test function from knowledge of just the curve p(x) and not its constitution. Use of p_λ(x) directly from (4.7.7), for instance, is not a practical proposition!
Example 4.7.3 A general mixture of k components Suppose

p(x) = Σ_{j=1}^k π_j f(x − μ_j, σ_j),    (4.7.8)

where f(x − μ, σ) denotes a p.d.f., μ_1, ..., μ_k are distinct, and 0 < σ_1 ≤ ... ≤ σ_k. Suppose also that the characteristic function of f(z, σ) takes the known form

[G(s)]^σ,

where G(s) is a characteristic function. Then it is indeed possible to use

p_λ(x) = Σ_{j=1}^k π_j f(x − μ_j, σ_j − λ)    (4.7.9)

as a test function.
This follows because the characteristic functions from (4.7.8) and (4.7.9), G_0(s) and G_λ(s), are related by

G_λ(s) = [G(s)]^(−λ) G_0(s),    for all s.

Thus

G_0(s) = [G(s)]^λ G_λ(s),    for all s,

and the right-hand side is a product of characteristic functions. As a result of the convolution nature of this relationship,

p(x) = ∫ f(x − y, λ) p_λ(y) dy,    for all x.    (4.7.10)
The relationship (4.7.10) does not involve any of the unknown parameters and identifies p_λ(y) as the solution of a Fredholm integral equation of the first kind, for which numerical solution, at least, is possible (see Medgyessy, 1977, Chapter V).
Learning about the parameters of a mixture
Suitable experimentation with λ ('formant changing' is Medgyessy's terminology) will reveal the number of components in the mixture.
Specific cases that conform to this pattern include mixtures of stable-law densities.
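The effect of the test function can be checked numerically for the normal instance of this scheme. The sketch below (all numerical values are illustrative assumptions, not taken from the text) builds the characteristic function G_0(s) of a two-component normal mixture, forms G_λ(s) = [G(s)]^(−λ) G_0(s) as in Example 4.7.3, and inverts it on a grid of x values; the result matches the mixture (4.7.9) with variances σ_j − λ, i.e. a curve with sharper component modes:

```python
import numpy as np

# Component "variances" sigma_j in the parameterization of Example 4.7.3:
# each component has characteristic function exp(i*s*mu_j) * [G(s)]^sigma_j
# with G(s) = exp(-s^2/2), i.e. a normal component with variance sigma_j.
pis  = np.array([0.5, 0.5])    # mixing weights pi_j (illustrative)
mus  = np.array([0.0, 3.0])    # means mu_j
sigs = np.array([1.0, 2.0])    # variances sigma_j
lam  = 0.7                     # lambda, chosen with lambda < min(sigma_j)

# G_0(s): characteristic function of the observed mixture (4.7.8)
s  = np.linspace(-30.0, 30.0, 6001)
ds = s[1] - s[0]
G0 = sum(p * np.exp(1j * s * m - v * s**2 / 2)
         for p, m, v in zip(pis, mus, sigs))

# G_lambda(s) = [G(s)]^(-lambda) * G_0(s) = exp(lambda*s^2/2) * G_0(s)
Glam = np.exp(lam * s**2 / 2) * G0

# Fourier inversion: p_lambda(x) = (1/2pi) * integral of exp(-i*s*x) G_lambda(s) ds
x = np.linspace(-6.0, 10.0, 161)
integrand = np.exp(-1j * np.outer(x, s)) * Glam
p_lam = (integrand.sum(axis=1) * ds).real / (2 * np.pi)

# the result should match (4.7.9): the same mixture with variances sigma_j - lambda
target = sum(p * np.exp(-(x - m)**2 / (2 * (v - lam))) / np.sqrt(2 * np.pi * (v - lam))
             for p, m, v in zip(pis, mus, sigs))
assert np.max(np.abs(p_lam - target)) < 1e-6
```

In practice G_0(s) would be estimated from data rather than written down analytically, and the amplification factor exp(λs²/2) then magnifies estimation error at high frequencies; the analytic version above only verifies the characteristic-function identity itself.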
In the case of a normal mixture we have to solve

p(x) = (2πλ)^(−1/2) ∫_{−∞}^{∞} exp[−(x − y)²/(2λ)] p_λ(y) dy.
Here there is an explicit solution, given by

p_λ(x) = Σ_{r=0}^∞ ((−λ/2)^r / r!) p^(2r)(x),

where p^(2r)(x) denotes the (2r)th derivative of p(x). Thus p_λ(x) could be drawn quite easily, using suitable truncation of the series and numerical estimation of the derivatives (see Medgyessy, 1977, Chapter III, Section 1.1.1.1 and pp. 133–136, where the pioneering work of Sen, 1922, and Doetsch, 1928, 1936, is acknowledged).
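The truncated series can be checked directly for a normal mixture. In the sketch below (component values are illustrative assumptions) the even derivatives p^(2r)(x) are computed exactly via probabilists' Hermite polynomials rather than estimated numerically, so that the behaviour of the series itself, apart from derivative-estimation error, can be verified against the closed-form deconvolution:

```python
import numpy as np

def dnorm_deriv(x, mu, v, n):
    # n-th derivative of the N(mu, v) density via probabilists' Hermite
    # polynomials: phi^(n)(x) = (-1/sqrt(v))^n * He_n(z) * phi(x), z = (x-mu)/sqrt(v)
    z = (x - mu) / np.sqrt(v)
    he_prev, he = np.ones_like(z), z.copy()
    if n == 0:
        he = he_prev
    else:
        for k in range(1, n):                 # He_{k+1} = z*He_k - k*He_{k-1}
            he_prev, he = he, z * he - k * he_prev
    phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi * v)
    return (-1 / np.sqrt(v))**n * he * phi

# illustrative normal mixture p(x) with component variances v_j
pis, mus, vs = [0.6, 0.4], [0.0, 2.5], [1.0, 1.5]
lam = 0.4                                     # lambda < min(v_j)
x = np.linspace(-5.0, 8.0, 400)

# truncated series: p_lambda(x) ~ sum_{r=0}^{R} ((-lam/2)^r / r!) * p^(2r)(x)
R, p_lam, coef = 20, np.zeros_like(x), 1.0
for r in range(R + 1):
    p2r = sum(pi * dnorm_deriv(x, m, v, 2 * r) for pi, m, v in zip(pis, mus, vs))
    p_lam += coef * p2r
    coef *= -lam / (2 * (r + 1))              # update (-lam/2)^r / r!

# exact deconvolution: the same mixture with variances v_j - lam
exact = sum(pi * np.exp(-(x - m)**2 / (2 * (v - lam))) / np.sqrt(2 * np.pi * (v - lam))
            for pi, m, v in zip(pis, mus, vs))
assert np.max(np.abs(p_lam - exact)) < 1e-6
```

The terms shrink roughly like (λ/v_j)^r, so a modest truncation point suffices when λ is well below the smallest component variance; with numerically estimated derivatives the usable truncation point would be much smaller.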
Practical application is described by Gregor (1969), who fits a normal mixture to histogram data. He subtracts one component after another from the mixture, as they are identified, until the fit is satisfactory. Suppose it is proposed that

p(x) = Σ_{j=1}^k π_j φ(x | μ_j, σ_j).

The method of Example 4.7.3, using numerical calculation of Fourier transforms and their inverses, identifies the 'narrowest' component and its mean, μ_1, say. The values of π_1 and σ_1 are then found by considering the areas of two small strips s_1, s_2 (Figure 4.7.3) round the mean, μ_1.
Figure 4.7.3 Strips used in Gregor’s method
It is assumed that the strips are narrow enough to ensure that there is little overlap with the other components. Analytical approximations of these areas in terms of π_1 and σ_1 are equated to numerical approximations obtained from the data by Simpson's rule. Once these equations are solved for π_1 and σ_1, the component

π_1 φ(x | μ_1, σ_1)

is 'subtracted' from the data and the procedure repeated. Further data-based examples are given in Sammon (1968).
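One peel-off step of this kind can be sketched as follows, with a smooth curve standing in for the histogram data and illustrative parameter values throughout; the strip width h, the placement of two adjacent strips to the right of the mean, and the use of trapezoidal rather than Simpson's-rule integration are simplifying assumptions, not details from Gregor's paper:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):                    # standard normal c.d.f.
    return 0.5 * (1 + erf(z / sqrt(2)))

def phi(x, mu, sig):           # normal density
    return np.exp(-(x - mu)**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

def p(x):                      # "observed" curve: an illustrative two-component mixture
    return 0.4 * phi(x, 0.0, 0.5) + 0.6 * phi(x, 7.0, 1.5)

x = np.arange(-5.0, 13.0, 0.001)
mu_hat = x[np.argmax(p(x))]    # mean of the narrowest (here, tallest) component

h = 0.5                        # strip width (assumed)
def area(a, b, n=2001):        # numerical strip area under p
    t = np.linspace(a, b, n)
    y = p(t)
    return (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * (t[1] - t[0])

A1 = area(mu_hat, mu_hat + h)            # strip s_1
A2 = area(mu_hat + h, mu_hat + 2 * h)    # strip s_2

# for one N(mu, sig^2) component of weight pi:
#   A1 = pi*(Phi(h/sig) - 0.5),  A2 = pi*(Phi(2h/sig) - Phi(h/sig)),
# so A2/A1 depends on sig alone; solve for sig by bisection (ratio is increasing)
def ratio(sig):
    return (Phi(2 * h / sig) - Phi(h / sig)) / (Phi(h / sig) - 0.5)

lo, hi = 0.05, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if ratio(mid) < A2 / A1:
        lo = mid
    else:
        hi = mid
sig_hat = (lo + hi) / 2
pi_hat = A1 / (Phi(h / sig_hat) - 0.5)

def residual(t):               # 'subtract' the component; repeat on what remains
    return p(t) - pi_hat * phi(t, mu_hat, sig_hat)
```

With well-separated components the recovered (π_1, μ_1, σ_1) are close to the true (0.4, 0, 0.5); overlap between components biases the strip areas, which is why the strips must be narrow.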
4.7.4 Further examples
Sammon (1968) and Stanat (1968) both extend the method of Example 4.7.3 to the multivariate case. Stanat (1968) deals with multivariate normal and multivariate Bernoulli mixtures. The normal case is described briefly below.
Example 4.7.4 Mixture of multivariate normals Suppose

p(x) = Σ_{j=1}^k π_j φ_d(x | μ_j, Σ_j),

where φ_d denotes the d-dimensional multivariate normal density.
The method is based on decomposition of univariate projections of this density. The mixing weights, means, and variances can be obtained from the marginal densities and then the covariances {(Σ_j)_il} from the densities of {x_i + x_l}. Difficulties may arise because of the possible equalities of some of the mixing weights or of some of the other parameters across the different component distributions. However, Stanat shows how to find another single linear combination, vᵀx, decomposition of whose density resolves these difficulties.
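The recovery of covariances from the densities of {x_i + x_l} rests on the identity Var(x_i + x_l) = Var(x_i) + Var(x_l) + 2 Cov(x_i, x_l). A minimal sketch for a single component (the covariance matrix and sample size are illustrative assumptions, and moments stand in for the full density decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# one multivariate normal component with an assumed covariance matrix
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)

v_i, v_l = X[:, 0].var(), X[:, 1].var()   # variances, from the marginal densities
v_sum    = (X[:, 0] + X[:, 1]).var()      # variance of the projection x_i + x_l
cov_hat  = (v_sum - v_i - v_l) / 2        # Var(x_i+x_l) = v_i + v_l + 2*Cov(x_i,x_l)

assert abs(cov_hat - 0.8) < 0.05
```

In the mixture setting each univariate projection is itself a normal mixture, so each of these variances must first be extracted component-by-component by a univariate decomposition such as that of Example 4.7.3.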
Example 4.7.5 Mixture of bivariate normals
Tarter and Silvers (1975) base a slightly different method on orthogonal expansion density estimates. They show that the bivariate normal mixture density

p(x) = Σ_{j=1}^k π_j φ_2(x | μ_j, Σ_j)

can be represented, within the unit square, as

p(x) = Σ_{r∈N} B_r exp(2πi rᵀx),    (4.7.11)

where N is the set of all 2-tuples of integers, i = √(−1) and, approximately,
B_r = Σ_{j=1}^k π_j exp(−2πi rᵀμ_j − 2π² rᵀΣ_j r),

where, throughout, π without a subscript denotes 3.14159.... (If necessary, transformations are made so that the data do lie in the unit square.) Given data in the form of a random sample, x_1, ..., x_n, a practicable estimate of (4.7.11) can be obtained by
(a) replacing B_r by an estimate,