Van Trees, H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p.
ISBN 0-471-09517-6

2. The second is to study the performance of the resulting receiver structures to see whether problems appear that did not occur in the scalar case. We discuss only a few simple examples in this section. In Chapter II.5 we return to the multidimensional problem and investigate some of the interesting phenomena.
† In the scalar case we wrote the signal energy separately and worked with normalized waveforms. In the vector case this complicates the notation needlessly, and we use unnormalized waveforms.
4.5.1 Formulation
We assume that s(t) is a known vector signal. The additive noise n(t) is a sample function from an M-dimensional Gaussian random process. We assume that it contains a white noise component:

$$\mathbf{n}(t) = \mathbf{w}(t) + \mathbf{n}_c(t), \qquad (425)$$

where

$$E[\mathbf{w}(t)\,\mathbf{w}^T(u)] = \frac{N_0}{2}\,\mathbf{I}\,\delta(t-u). \qquad (426a)$$

A more general case is

$$E[\mathbf{w}(t)\,\mathbf{w}^T(u)] = \mathbf{N}\,\delta(t-u). \qquad (426b)$$
The matrix N contains only numbers. We assume that it is positive-definite. Physically this means that all components of w(t), or any linear transformation of w(t), will contain a white noise component. The general case is done in Problem 4.5.2. We consider the case described by (426a) in the text. The covariance function matrix of the colored noise is

$$E[\mathbf{n}_c(t)\,\mathbf{n}_c^T(u)] = \mathbf{K}_c(t,u). \qquad (427)$$

We assume that each element in K_c(t,u) is square-integrable and that the white and colored components are independent. Using (425)-(427), we have

$$\mathbf{K}_n(t,u) = \frac{N_0}{2}\,\mathbf{I}\,\delta(t-u) + \mathbf{K}_c(t,u). \qquad (428)$$
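The positive-definite requirement on N is exactly what makes the general case (426b) reducible to (426a): a prewhitening transformation W = N^{-1/2} turns the noise covariance into the identity. A minimal numerical sketch, with an assumed 2-channel spectral-height matrix that is not from the text:

```python
import numpy as np

# Sketch: any positive-definite spectral-height matrix N (eq. 426b) can be
# prewhitened. With W = N^{-1/2}, the transformed noise W w(t) satisfies
# E[(W w)(W w)^T] = W N W^T delta(t-u) = I delta(t-u).
N = np.array([[2.0, 0.5],            # assumed 2-channel example
              [0.5, 1.0]])
vals, vecs = np.linalg.eigh(N)
assert np.all(vals > 0)              # positive-definiteness check
W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric inverse square root
print(np.allclose(W @ N @ W.T, np.eye(2)))  # covariance is now the identity
```

The symmetric square root is one convenient choice of W; any factorization with W N W^T = I whitens equally well.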
To construct the likelihood ratio we proceed as in the scalar case. Under hypothesis H_1,

$$r_i \triangleq \int_{T_i}^{T_f} \mathbf{r}^T(t)\,\boldsymbol{\phi}_i(t)\,dt = \int_{T_i}^{T_f} \mathbf{s}^T(t)\,\boldsymbol{\phi}_i(t)\,dt + \int_{T_i}^{T_f} \mathbf{n}^T(t)\,\boldsymbol{\phi}_i(t)\,dt = s_i + n_i. \qquad (429)$$
Notice that all of the coefficients are scalars. Thus (180) is directly applicable:

$$\ln \Lambda[\mathbf{r}(t)] = \sum_{i=1}^{\infty} \frac{r_i s_i}{\lambda_i} - \frac{1}{2}\sum_{i=1}^{\infty} \frac{s_i^2}{\lambda_i}. \qquad (430)$$

Substituting (429) into (430), we have

$$\ln \Lambda[\mathbf{r}(t)] = \int_{T_i}^{T_f}\!\!\int_{T_i}^{T_f} \mathbf{r}^T(t)\left[\sum_{i=1}^{\infty} \frac{\boldsymbol{\phi}_i(t)\,\boldsymbol{\phi}_i^T(u)}{\lambda_i}\right]\mathbf{s}(u)\,dt\,du - \frac{1}{2}\int_{T_i}^{T_f}\!\!\int_{T_i}^{T_f} \mathbf{s}^T(t)\left[\sum_{i=1}^{\infty} \frac{\boldsymbol{\phi}_i(t)\,\boldsymbol{\phi}_i^T(u)}{\lambda_i}\right]\mathbf{s}(u)\,dt\,du. \qquad (431)$$
Defining

$$\mathbf{Q}_n(t,u) \triangleq \sum_{i=1}^{\infty} \frac{1}{\lambda_i}\,\boldsymbol{\phi}_i(t)\,\boldsymbol{\phi}_i^T(u), \qquad T_i < t, u < T_f, \qquad (432)$$
we have

$$\ln \Lambda[\mathbf{r}(t)] = \int_{T_i}^{T_f}\!\!\int_{T_i}^{T_f} \mathbf{r}^T(t)\,\mathbf{Q}_n(t,u)\,\mathbf{s}(u)\,dt\,du - \frac{1}{2}\int_{T_i}^{T_f}\!\!\int_{T_i}^{T_f} \mathbf{s}^T(t)\,\mathbf{Q}_n(t,u)\,\mathbf{s}(u)\,dt\,du. \qquad (433)$$
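On a discrete time grid, the inverse kernel Q_n(t,u) in (433) is, up to grid-spacing factors, just the matrix inverse of the sampled K_n(t,u). The sketch below is a hypothetical illustration (the waveforms, energies, noise level, and colored-noise kernel are all assumptions, not from the text): it evaluates the quadratic form in the second term of (433) for two channels stacked channel-major, checks the white-noise-only case against the closed form (2/N0)(E1 + E2), and confirms that adding an independent colored component can only shrink the quadratic form.

```python
import numpy as np

# Midpoint grid on [0, T]; M = 2 channels, stacked channel-major.
n, M, T = 200, 2, 1.0
dt = T / n
t = (np.arange(n) + 0.5) * dt

N0 = 0.1                                   # assumed white spectral height
s1 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)   # orthonormal waveforms
s2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
E1, E2 = 4.0, 1.0                          # assumed channel energies
s = np.concatenate([np.sqrt(E1) * s1, np.sqrt(E2) * s2])

def quad_form(Kc_block):
    # K_n(t,u) = (N0/2) I delta(t-u) + K_c(t,u); delta(t-u) ~ I/dt on the grid
    K = (N0 / 2) / dt * np.eye(M * n) + np.kron(np.eye(M), Kc_block)
    Q = np.linalg.inv(K) / dt**2           # discrete inverse kernel
    return float(s @ Q @ s) * dt**2        # double integral of s^T Q_n s

q_white = quad_form(np.zeros((n, n)))      # white noise only: 2(E1+E2)/N0
C = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)  # assumed K_c
q_col = quad_form(C)                       # extra noise lowers the form
print(q_white, q_col < q_white)
```

The grid factors come from approximating ∫K(t,u)f(u)du by (K dt)f, so the operator inverse is inv(K)/dt².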
Using the vector form of Mercer's theorem (2.253) and (432), we observe that

$$\int_{T_i}^{T_f} \mathbf{Q}_n(t,u)\,\mathbf{K}_n(u,z)\,du = \mathbf{I}\,\delta(t-z), \qquad T_i < t, z < T_f. \qquad (434)$$

By analogy with the scalar case we write

$$\mathbf{Q}_n(t,u) = \frac{2}{N_0}\left[\mathbf{I}\,\delta(t-u) - \mathbf{h}_0(t,u)\right] \qquad (435)$$

and show that h_0(t,u) can be represented by a convergent series. The details are in Problem 4.5.1. As in the scalar case, we simplify (433) by defining

$$\mathbf{g}(t) \triangleq \int_{T_i}^{T_f} \mathbf{Q}_n(t,u)\,\mathbf{s}(u)\,du, \qquad T_i \le t \le T_f. \qquad (436)$$

With this definition the data-dependent term of (433) reduces to $\int_{T_i}^{T_f} \mathbf{r}^T(t)\,\mathbf{g}(t)\,dt$, which the receiver compares with a threshold; the remaining term is a bias absorbed into that threshold.
Fig. 4.76 Vector receivers: (a) matrix correlator; (b) matrix matched filter.
Application 369
The optimum receiver is just a vector correlator or vector matched filter, as shown in Fig. 4.76. The double lines indicate vector operations, and the dot symbol denotes the dot product of the two input vectors. We can show that the performance index is
$$d^2 = \int_{T_i}^{T_f}\!\!\int_{T_i}^{T_f} \mathbf{s}^T(t)\,\mathbf{Q}_n(t,u)\,\mathbf{s}(u)\,dt\,du = \int_{T_i}^{T_f} \mathbf{s}^T(t)\,\mathbf{g}(t)\,dt. \qquad (437)$$
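A quick Monte Carlo sanity check of (436)-(437) for the white-noise-only case (one channel shown for brevity; the waveform, N0, and trial count are assumptions for illustration): with g(t) = (2/N0)s(t), the correlator output ∫ r^T(t) g(t) dt has mean separation d² between the hypotheses and variance d² under either one.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, N0 = 400, 1.0, 0.5                 # assumed grid and noise level
dt = T / n
t = (np.arange(n) + 0.5) * dt
s = np.sqrt(2.0) * np.sin(2 * np.pi * t / T)   # unit-energy waveform, E = 1
g = 2 / N0 * s                            # eq. (436) with white noise only
d2 = np.sum(s * g) * dt                   # eq. (437): d^2 = 2E/N0 = 4

trials = 20000
# Discrete white noise with E[w(t)w(u)] = (N0/2) delta(t-u): per-sample std
w = rng.normal(scale=np.sqrt(N0 / (2 * dt)), size=(trials, n))
l0 = w @ g * dt                           # correlator output under H0
l1 = (w + s) @ g * dt                     # correlator output under H1
print(l1.mean() - l0.mean(), l0.var())    # both should be close to d2
```

That the mean separation and the variance are the same number d² is what lets d² serve as the single performance index, exactly as in the scalar case.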
4.5.2 Application
Consider a simple example.

Example. Let

$$\mathbf{s}(t) = \begin{bmatrix} \sqrt{E_1}\,s_1(t) \\ \sqrt{E_2}\,s_2(t) \\ \vdots \\ \sqrt{E_M}\,s_M(t) \end{bmatrix}, \qquad 0 \le t \le T, \qquad (438)$$

where the s_i(t) are orthonormal.
Assume that the channel noises are independent and white:

$$E[\mathbf{w}(t)\,\mathbf{w}^T(u)] = \frac{N_0}{2}\,\mathbf{I}\,\delta(t-u). \qquad (439)$$

Then

$$\mathbf{g}(t) = \frac{2}{N_0}\begin{bmatrix} \sqrt{E_1}\,s_1(t) \\ \vdots \\ \sqrt{E_M}\,s_M(t) \end{bmatrix} = \frac{2}{N_0}\,\mathbf{s}(t). \qquad (440)$$
The resulting receiver is the vector correlator shown in Fig. 4.77, and the performance index is

$$d^2 = \frac{2}{N_0}\sum_{i=1}^{M} E_i. \qquad (441)$$
This receiver is commonly called a maximal ratio combiner [65] because the inputs are weighted to maximize the output signal-to-noise ratio. The appropriate combiners for colored noise are developed in the problems.
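The maximal-ratio rule is easy to check numerically. In the sketch below, the per-channel energies and noise levels are assumptions chosen unequal so the weighting matters (unlike the equal-noise example above; unequal levels are the case treated in the problems). Each channel's matched-filter output is weighted by its signal amplitude over its noise variance; the combined SNR then equals the sum of the per-channel SNRs 2E_i/N_i, and by Cauchy-Schwarz no other weighting does better.

```python
import numpy as np

# Per-channel matched-filter outputs: mean a_i = sqrt(E_i), variance N_i/2.
E = np.array([4.0, 1.0, 0.25])     # assumed channel energies
N = np.array([0.1, 0.2, 0.05])     # assumed per-channel noise levels
a, var = np.sqrt(E), N / 2

def out_snr(w):
    # SNR of the combined statistic sum_i w_i r_i
    return (w @ a) ** 2 / (w @ (var * w))

w_mrc = a / var                    # maximal-ratio weights: amplitude/variance
snr_mrc = out_snr(w_mrc)           # equals sum_i a_i^2/var_i = sum_i 2E_i/N_i
print(np.isclose(snr_mrc, np.sum(2 * E / N)))

rng = np.random.default_rng(0)     # random alternative weightings never win
print(all(out_snr(rng.normal(size=3)) <= snr_mrc + 1e-9 for _ in range(1000)))
```

The optimality is one line of Cauchy-Schwarz: (Σ w_i a_i)² ≤ (Σ w_i² var_i)(Σ a_i²/var_i), with equality when w_i ∝ a_i/var_i.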