Van Trees H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p.
ISBN 0-471-09517-6

\ln \Lambda(\mathbf{R}) \overset{H_1}{\underset{H_0}{\gtrless}} \ln \eta.    (16)
H_1 : r_i = m + n_i, \quad i = 1, 2, \ldots, N,
H_0 : r_i = n_i, \qquad\;\; i = 1, 2, \ldots, N,    (17)
and

p_{n_i}(X) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{X^2}{2\sigma^2}\right),    (18)

because the noise samples are Gaussian.
The probability density of r_i under each hypothesis follows easily:

p_{r_i|H_1}(R_i|H_1) = p_{n_i}(R_i - m) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right)    (19)
and

p_{r_i|H_0}(R_i|H_0) = p_{n_i}(R_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right).    (20)
[Fig. 2.5 Likelihood ratio processors: (a) a data processor computes \Lambda(\mathbf{R}), which a threshold device compares with \eta to reach a decision; (b) equivalently, \ln \Lambda(\mathbf{R}) is compared with \ln \eta.]
2.2 Simple Binary Hypothesis Tests
Fig. 2.6 Model for Example 1.
Because the n_i are statistically independent, the joint probability density of the r_i (or, equivalently, of the vector r) is simply the product of the individual probability densities. Thus
p_{r|H_1}(\mathbf{R}|H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right)    (21)

and

p_{r|H_0}(\mathbf{R}|H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right).    (22)

Substituting into (13), we have

\Lambda(\mathbf{R}) = \frac{\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right)}{\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right)}.    (23)

After canceling common terms and taking the logarithm, we have

\ln \Lambda(\mathbf{R}) = \frac{m}{\sigma^2} \sum_{i=1}^{N} R_i - \frac{Nm^2}{2\sigma^2}.    (24)

Thus the likelihood ratio test is

\frac{m}{\sigma^2} \sum_{i=1}^{N} R_i - \frac{Nm^2}{2\sigma^2} \overset{H_1}{\underset{H_0}{\gtrless}} \ln \eta    (25)

or, equivalently,

\sum_{i=1}^{N} R_i \overset{H_1}{\underset{H_0}{\gtrless}} \frac{\sigma^2}{m} \ln \eta + \frac{Nm}{2}.    (26)
We see that the processor simply adds the observations and compares them with a threshold.
In this example the only way the data appear in the likelihood ratio test is in a sum. This is an example of a sufficient statistic, which we denote by l(\mathbf{R}) (or simply l when the argument is obvious). It is just a function of the received data which has the property that \Lambda(\mathbf{R}) can be written as a function of l. In other words, when making a decision, knowing the value of the sufficient statistic is just as good as knowing \mathbf{R}. In Example 1, l is a linear function of the R_i. A case in which this is not true is illustrated in Example 2.
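The equivalence between the full log-likelihood ratio and its sufficient-statistic form (24) can be checked numerically. The parameter values m, sigma, and N below are arbitrary illustrative choices, not values from the text:

```python
import random

# A minimal numerical check of Example 1. The values of m, sigma, and N
# are arbitrary illustrative choices, not values from the text.
m, sigma, N = 1.0, 2.0, 10
random.seed(0)

def log_likelihood_ratio(R):
    """ln Lambda(R) evaluated directly from the densities (21) and (22);
    the (1/sqrt(2 pi) sigma)^N factors cancel in the ratio."""
    ll1 = sum(-(Ri - m) ** 2 / (2 * sigma ** 2) for Ri in R)
    ll0 = sum(-Ri ** 2 / (2 * sigma ** 2) for Ri in R)
    return ll1 - ll0

def via_sufficient_statistic(R):
    """ln Lambda(R) written in terms of l(R) = sum of the R_i, as in (24)."""
    l = sum(R)
    return m / sigma ** 2 * l - N * m ** 2 / (2 * sigma ** 2)

# The two forms agree, so the sum of the observations is sufficient.
R = [random.gauss(m, sigma) for _ in range(N)]
assert abs(log_likelihood_ratio(R) - via_sufficient_statistic(R)) < 1e-9
```

Since both forms are monotone in the same quantity, thresholding l(\mathbf{R}) gives exactly the same decisions as thresholding \Lambda(\mathbf{R}).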
Example 2. Several different physical situations lead to the mathematical model of interest in this example. The observations consist of a set of N values: r_1, r_2, r_3, \ldots, r_N. Under both hypotheses, the r_i are independent, identically distributed, zero-mean Gaussian random variables. Under H_1 each r_i has a variance \sigma_1^2. Under H_0 each r_i has a variance \sigma_0^2. Because the variables are independent, the joint density is simply the product of the individual densities. Therefore
p_{r|H_1}(\mathbf{R}|H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left(-\frac{R_i^2}{2\sigma_1^2}\right)    (27)

and

p_{r|H_0}(\mathbf{R}|H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_0} \exp\left(-\frac{R_i^2}{2\sigma_0^2}\right).    (28)

Substituting (27) and (28) into (13) and taking the logarithm, we have

\frac{1}{2} \left( \frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2} \right) \sum_{i=1}^{N} R_i^2 + N \ln \frac{\sigma_0}{\sigma_1} \overset{H_1}{\underset{H_0}{\gtrless}} \ln \eta.    (29)
In this case the sufficient statistic is the sum of the squares of the observations
l(\mathbf{R}) = \sum_{i=1}^{N} R_i^2    (30)
and an equivalent test for \sigma_1^2 > \sigma_0^2 is

l(\mathbf{R}) \overset{H_1}{\underset{H_0}{\gtrless}} \frac{2\sigma_1^2 \sigma_0^2}{\sigma_1^2 - \sigma_0^2} \left( \ln \eta + N \ln \frac{\sigma_1}{\sigma_0} \right) \triangleq \gamma.    (31)

For \sigma_1^2 < \sigma_0^2 the inequality is reversed because we are multiplying by a negative number:

l(\mathbf{R}) \overset{H_0}{\underset{H_1}{\gtrless}} \frac{2\sigma_0^2 \sigma_1^2}{\sigma_0^2 - \sigma_1^2} \left( N \ln \frac{\sigma_0}{\sigma_1} - \ln \eta \right) \triangleq \gamma'; \quad (\sigma_1^2 < \sigma_0^2).    (32)
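The equivalence between the log-likelihood ratio built from (27) and (28) and the energy test (31) can also be verified numerically. The values of sigma0, sigma1, N, and eta below are hypothetical, chosen with sigma1 > sigma0 so that the form (31) applies:

```python
import math
import random

# A numerical check of Example 2 with hypothetical parameters.
sigma0, sigma1, N, eta = 1.0, 2.0, 16, 1.0
random.seed(0)

def log_lr(R):
    """ln Lambda(R) from the densities (27) and (28)."""
    s = sum(Ri ** 2 for Ri in R)
    return 0.5 * (1 / sigma0 ** 2 - 1 / sigma1 ** 2) * s + N * math.log(sigma0 / sigma1)

def energy_test(R):
    """Equivalent test (31): compare l(R) = sum of R_i^2 with gamma."""
    gamma = (2 * sigma1 ** 2 * sigma0 ** 2 / (sigma1 ** 2 - sigma0 ** 2)
             * (math.log(eta) + N * math.log(sigma1 / sigma0)))
    return sum(Ri ** 2 for Ri in R) > gamma  # True means decide H1

# Both forms reach the same decision on the same data.
R = [random.gauss(0.0, sigma1) for _ in range(N)]
assert (log_lr(R) > math.log(eta)) == energy_test(R)
```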
These two examples have emphasized Gaussian variables. In the next example we consider a different type of distribution.
Example 3. The Poisson distribution of events is encountered frequently as a model of shot noise and other diverse phenomena (e.g., [1] or [2]). Each time the experiment is conducted a certain number of events occur. Our observation is just this number, which ranges from 0 to \infty and obeys a Poisson distribution on both hypotheses; that is,

\Pr(n \text{ events}) = \frac{m_i^n}{n!} e^{-m_i}, \quad n = 0, 1, 2, \ldots; \quad i = 0, 1,    (33)
where m_i is the parameter that specifies the average number of events:

E(n) = m_i.    (34)
It is this parameter m_i that is different in the two hypotheses. Rewriting (33) to emphasize this point, we have for the two Poisson distributions

\Pr(n \text{ events} \mid H_1) = \frac{m_1^n}{n!} e^{-m_1}, \quad n = 0, 1, 2, \ldots,

and

\Pr(n \text{ events} \mid H_0) = \frac{m_0^n}{n!} e^{-m_0}, \quad n = 0, 1, 2, \ldots.
This example illustrates how the likelihood ratio test which we originally wrote in terms of probability densities can be simply adapted to accommodate observations that are discrete random variables. We now return to our general discussion of Bayes tests.
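A sketch of the resulting test, with hypothetical rates m0 and m1 (not values from the text): forming the likelihood ratio from (33), the n! terms cancel, leaving \Lambda(n) = (m_1/m_0)^n e^{-(m_1 - m_0)}, which for m_1 > m_0 reduces to comparing the observed count n with a threshold:

```python
import math

# A sketch of the counting test implied by (33). The rates m0 and m1 are
# hypothetical, with m1 > m0 so that larger counts favor H1.
m0, m1, eta = 2.0, 5.0, 1.0

def log_lr(n):
    """ln Lambda(n) for the two Poisson distributions: the n! terms cancel,
    leaving Lambda(n) = (m1/m0)**n * exp(-(m1 - m0))."""
    return n * math.log(m1 / m0) - (m1 - m0)

def count_test(n):
    """Equivalent rule for m1 > m0: decide H1 when the count n exceeds
    (ln eta + m1 - m0) / ln(m1 / m0)."""
    return n > (math.log(eta) + m1 - m0) / math.log(m1 / m0)

# The two forms agree for every observable count.
for n in range(50):
    assert (log_lr(n) > math.log(eta)) == count_test(n)
```

Here the sufficient statistic is the count n itself, even though the observation is discrete rather than a probability density.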