# Detection, Estimation, and Modulation Theory, Part I - Van Trees, H. L.

ISBN 0-471-09517-6


We discuss the binary hypothesis testing version of the general Gaussian problem in detail in the text. The M-hypothesis and the estimation problems are developed in the problems. The basic model for the binary detection problem is straightforward. We assume that the observation space is N-dimensional. Points in the space are denoted by the N-dimensional vector (or column matrix) r:

$$\mathbf{r} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix}. \tag{319}$$

Under the first hypothesis $H_1$ we assume that r is a Gaussian random vector, which is completely specified by its mean vector and covariance matrix. We denote these quantities as

$$E[\mathbf{r} \mid H_1] \triangleq \begin{bmatrix} E(r_1 \mid H_1) \\ E(r_2 \mid H_1) \\ \vdots \\ E(r_N \mid H_1) \end{bmatrix} = \begin{bmatrix} m_{11} \\ m_{12} \\ \vdots \\ m_{1N} \end{bmatrix} \triangleq \mathbf{m}_1. \tag{320}$$

The covariance matrix is

$$\mathbf{K}_1 \triangleq E[(\mathbf{r} - \mathbf{m}_1)(\mathbf{r}^T - \mathbf{m}_1^T) \mid H_1] = \begin{bmatrix} K_{11} & K_{12} & \cdots & K_{1N} \\ K_{21} & K_{22} & & \vdots \\ \vdots & & \ddots & \\ K_{N1} & \cdots & & K_{NN} \end{bmatrix}. \tag{321}$$

We define the inverse of $\mathbf{K}_1$ as $\mathbf{Q}_1$:

$$\mathbf{Q}_1 \triangleq \mathbf{K}_1^{-1}, \tag{322}$$

$$\mathbf{Q}_1\mathbf{K}_1 = \mathbf{K}_1\mathbf{Q}_1 = \mathbf{I}, \tag{323}$$

98 2.6 The General Gaussian Problem

where I is the identity matrix (ones on the diagonal and zeros elsewhere). Using (320), (321), (322), and (318), we may write the probability density of r on $H_1$:

$$p_{\mathbf{r}|H_1}(\mathbf{R} \mid H_1) = \left[(2\pi)^{N/2}\,|\mathbf{K}_1|^{1/2}\right]^{-1} \exp\left[-\tfrac{1}{2}(\mathbf{R}^T - \mathbf{m}_1^T)\,\mathbf{Q}_1\,(\mathbf{R} - \mathbf{m}_1)\right]. \tag{324}$$
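As a sanity check, the density in (324) can be evaluated directly. The sketch below uses invented numbers (the mean, covariance, and observation are illustrative only, not from the text) and compares the closed form against SciPy's multivariate normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative values only; any mean vector and positive definite
# covariance matrix would do.
m1 = np.array([1.0, -1.0])
K1 = np.array([[1.0, 0.3],
               [0.3, 2.0]])
R  = np.array([0.5, 0.0])
N  = len(R)

# Closed form of (324): normalization constant times the exponent of
# the quadratic form (R - m1)^T Q1 (R - m1).
Q1 = np.linalg.inv(K1)
p  = ((2 * np.pi) ** (N / 2) * np.linalg.det(K1) ** 0.5) ** -1 \
     * np.exp(-0.5 * (R - m1) @ Q1 @ (R - m1))

# Agrees with SciPy's multivariate normal density.
print(np.isclose(p, multivariate_normal(mean=m1, cov=K1).pdf(R)))
```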

Going through a similar set of definitions for $H_0$, we obtain the probability density

$$p_{\mathbf{r}|H_0}(\mathbf{R} \mid H_0) = \left[(2\pi)^{N/2}\,|\mathbf{K}_0|^{1/2}\right]^{-1} \exp\left[-\tfrac{1}{2}(\mathbf{R}^T - \mathbf{m}_0^T)\,\mathbf{Q}_0\,(\mathbf{R} - \mathbf{m}_0)\right]. \tag{325}$$

Using the definition in (13), the likelihood ratio test follows easily:

$$\Lambda(\mathbf{R}) \triangleq \frac{p_{\mathbf{r}|H_1}(\mathbf{R}\mid H_1)}{p_{\mathbf{r}|H_0}(\mathbf{R}\mid H_0)} = \frac{|\mathbf{K}_0|^{1/2}\exp\left[-\tfrac{1}{2}(\mathbf{R}^T - \mathbf{m}_1^T)\,\mathbf{Q}_1\,(\mathbf{R}-\mathbf{m}_1)\right]}{|\mathbf{K}_1|^{1/2}\exp\left[-\tfrac{1}{2}(\mathbf{R}^T - \mathbf{m}_0^T)\,\mathbf{Q}_0\,(\mathbf{R}-\mathbf{m}_0)\right]} \underset{H_0}{\overset{H_1}{\gtrless}} \eta. \tag{326}$$

Taking logarithms, we obtain

$$\tfrac{1}{2}(\mathbf{R}^T - \mathbf{m}_0^T)\,\mathbf{Q}_0\,(\mathbf{R} - \mathbf{m}_0) - \tfrac{1}{2}(\mathbf{R}^T - \mathbf{m}_1^T)\,\mathbf{Q}_1\,(\mathbf{R} - \mathbf{m}_1) \underset{H_0}{\overset{H_1}{\gtrless}} \ln\eta + \tfrac{1}{2}\ln|\mathbf{K}_1| - \tfrac{1}{2}\ln|\mathbf{K}_0| \triangleq \gamma^*. \tag{327}$$

We see that the test consists of finding the difference between two quadratic forms. The result in (327) is basic to many of our later discussions. For this reason we treat various cases of the general Gaussian problem in some detail. We begin with the simplest.
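The test in (326)-(327) is easy to sketch numerically. The following is a minimal illustration, not a definitive implementation: all numerical values are invented, and `gaussian_lrt` is a hypothetical helper name, not from the text.

```python
import numpy as np

def gaussian_lrt(R, m0, m1, K0, K1, eta=1.0):
    """Decide between H0 and H1 via the log form (327).

    Returns True when H1 is chosen, i.e. when the difference of
    quadratic forms exceeds the threshold gamma*.
    """
    Q0 = np.linalg.inv(K0)
    Q1 = np.linalg.inv(K1)
    lhs = 0.5 * (R - m0) @ Q0 @ (R - m0) \
        - 0.5 * (R - m1) @ Q1 @ (R - m1)
    gamma_star = np.log(eta) + 0.5 * np.log(np.linalg.det(K1)) \
                             - 0.5 * np.log(np.linalg.det(K0))
    return lhs > gamma_star

# Example with N = 2, equal covariances, different means (made up).
m0 = np.array([0.0, 0.0])
m1 = np.array([1.0, 1.0])
K  = np.eye(2)
print(gaussian_lrt(np.array([0.9, 1.1]), m0, m1, K, K))   # near m1: H1
print(gaussian_lrt(np.array([0.0, -0.1]), m0, m1, K, K))  # near m0: H0
```

With equal covariances the determinant terms cancel and the threshold reduces to $\ln \eta$, which is why the observation near $\mathbf{m}_1$ exceeds it.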

2.6.1 Equal Covariance Matrices

The first special case of interest is the one in which the covariance matrices on the two hypotheses are equal,

$$\mathbf{K}_1 = \mathbf{K}_0 \triangleq \mathbf{K}, \tag{328}$$

but the means are different.

Denote the inverse as Q:

$$\mathbf{Q} = \mathbf{K}^{-1}. \tag{329}$$

Substituting into (327), multiplying the matrices, canceling common terms, and using the symmetry of Q, we have

$$(\mathbf{m}_1^T - \mathbf{m}_0^T)\,\mathbf{Q}\,\mathbf{R} \underset{H_0}{\overset{H_1}{\gtrless}} \ln\eta + \tfrac{1}{2}\left(\mathbf{m}_1^T\mathbf{Q}\,\mathbf{m}_1 - \mathbf{m}_0^T\mathbf{Q}\,\mathbf{m}_0\right) \triangleq \gamma'. \tag{330}$$

We can simplify this expression by defining a vector corresponding to the difference in the mean value vectors on the two hypotheses:

$$\Delta\mathbf{m} \triangleq \mathbf{m}_1 - \mathbf{m}_0. \tag{331}$$

Then (330) becomes

$$\Delta\mathbf{m}^T\,\mathbf{Q}\,\mathbf{R} \underset{H_0}{\overset{H_1}{\gtrless}} \gamma', \tag{332}$$

or, equivalently,

$$l(\mathbf{R}) \triangleq \mathbf{R}^T\,\mathbf{Q}\,\Delta\mathbf{m} \underset{H_0}{\overset{H_1}{\gtrless}} \gamma'. \tag{333}$$

The quantity on the left is a scalar Gaussian random variable, for it was obtained by a linear transformation of jointly Gaussian random variables. Therefore, as we discussed in Example 1 on pp. 36-38, we can completely characterize the performance of the test by the quantity $d^2$. In that example, we defined $d$ as the distance between the means on the two hypotheses when the variance was normalized to equal one. An identical definition is

$$d^2 \triangleq \frac{\left[E(l \mid H_1) - E(l \mid H_0)\right]^2}{\operatorname{Var}(l \mid H_0)}. \tag{334}$$

Substituting (320) into the definition of $l$, we have

$$E(l \mid H_1) = \Delta\mathbf{m}^T\,\mathbf{Q}\,\mathbf{m}_1 \tag{335}$$

and

$$E(l \mid H_0) = \Delta\mathbf{m}^T\,\mathbf{Q}\,\mathbf{m}_0. \tag{336}$$

Using (332), (333), and (336) we have

$$\operatorname{Var}[l \mid H_0] = E\left\{\left[\Delta\mathbf{m}^T\mathbf{Q}(\mathbf{R} - \mathbf{m}_0)\right]\left[(\mathbf{R}^T - \mathbf{m}_0^T)\,\mathbf{Q}\,\Delta\mathbf{m}\right]\right\}. \tag{337}$$

Using (321) to evaluate the expectation and then (323), we have

$$\operatorname{Var}[l \mid H_0] = \Delta\mathbf{m}^T\,\mathbf{Q}\,\Delta\mathbf{m}. \tag{338}$$

Substituting (335), (336), and (338) into (334), we obtain

$$d^2 = \Delta\mathbf{m}^T\,\mathbf{Q}\,\Delta\mathbf{m}. \tag{339}$$

Thus the performance for the equal covariance Gaussian case is completely determined by the quadratic form in (339). We now interpret it for some cases of interest.
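Before turning to the special cases, a small numerical sketch (all numbers invented for illustration) confirms that the quadratic form in (339) equals the defining ratio (334) evaluated from the moments (335)-(338):

```python
import numpy as np

# Hypothetical means and common covariance matrix (not from the text).
m0 = np.array([0.0, 0.0])
m1 = np.array([2.0, 1.0])
K  = np.array([[2.0, 0.5],
               [0.5, 1.0]])        # positive definite, shared by H0 and H1
Q  = np.linalg.inv(K)
dm = m1 - m0                       # Delta m, eq. (331)

d2 = dm @ Q @ dm                   # eq. (339)

# The defining ratio (334): numerator from (335)-(336), denominator
# from (338).
num = (dm @ Q @ m1 - dm @ Q @ m0) ** 2
den = dm @ Q @ dm
print(np.isclose(num / den, d2))   # the two expressions agree
```

Algebraically the check is immediate: the numerator is $(\Delta\mathbf{m}^T\mathbf{Q}\,\Delta\mathbf{m})^2$ and the denominator is $\Delta\mathbf{m}^T\mathbf{Q}\,\Delta\mathbf{m}$, so the ratio collapses to (339).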

Case 1. Independent Components with Equal Variance. Each $r_i$ has the same variance $\sigma^2$ and is statistically independent. Thus

$$\mathbf{K} = \sigma^2\,\mathbf{I} \tag{340}$$

and

$$\mathbf{Q} = \frac{1}{\sigma^2}\,\mathbf{I}. \tag{341}$$


Substituting (341) into (339), we obtain

$$d^2 = \Delta\mathbf{m}^T\,\frac{1}{\sigma^2}\,\mathbf{I}\,\Delta\mathbf{m} = \frac{1}{\sigma^2}\,\Delta\mathbf{m}^T\,\Delta\mathbf{m} = \frac{|\Delta\mathbf{m}|^2}{\sigma^2}, \tag{342}$$

or

$$d = \frac{|\Delta\mathbf{m}|}{\sigma}. \tag{343}$$

We see that $d$ corresponds to the distance between the two mean-value vectors divided by the standard deviation of the $r_i$. This is shown in Fig. 2.32. In (332) we see that

$$l = \frac{1}{\sigma^2}\,\Delta\mathbf{m}^T\,\mathbf{R}. \tag{344}$$

Thus the sufficient statistic is just the dot (or scalar) product of the observed vector $\mathbf{R}$ and the mean-difference vector $\Delta\mathbf{m}$.
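A minimal sketch of Case 1, with invented numbers: when $\mathbf{K} = \sigma^2\mathbf{I}$, the statistic reduces to the scaled dot product (344) and the distance to $|\Delta\mathbf{m}|/\sigma$ as in (343).

```python
import numpy as np

# Illustrative values only (not from the text).
sigma = 2.0
m0 = np.array([0.0, 0.0, 0.0])
m1 = np.array([1.0, 2.0, 2.0])
dm = m1 - m0                        # Delta m, eq. (331)

d = np.linalg.norm(dm) / sigma      # eq. (343): |Delta m| = 3, so d = 1.5

def l(R):
    # Sufficient statistic, eq. (344): a dot product scaled by 1/sigma^2.
    return dm @ R / sigma**2

print(d)        # 1.5
print(l(m1))    # E(l|H1) = |Delta m|^2 / sigma^2 = 9/4 = 2.25
```

This scaled dot product is the discrete analog of a matched filter: the observation is projected onto the direction separating the two means.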

Case 2. Independent Components with Unequal Variances. Here the $r_i$ are statistically independent but have unequal variances. Thus
