# Detection, Estimation, and Modulation Theory, Part I - Van Trees, H. L.

ISBN 0-471-09517-6


$$R_s = \int_{T_i}^{T_f} J^{-1}(t, t)\, dt. \tag{164}$$

The kernel $J^{-1}(t, x)$ is the inverse of the information matrix kernel $J(t, x)$ and is defined by the matrix integral equation

$$J^{-1}(t, x) + \int_{T_i}^{T_f} du \int_{T_i}^{T_f} dz\, J^{-1}(t, u)\, E\!\left[\mathbf{D}(u, \mathbf{a}(u))\, \mathbf{Q}_n(u, z)\, \mathbf{D}^T(z, \mathbf{a}(z))\right] K_a(z, x) = K_a(t, x), \quad T_i \le t, x \le T_f. \tag{165}$$

The derivation of (164) and (165) is quite tedious and does not add any insight to the problem. Therefore we omit it. The details are contained in [11].

As a final topic in our current discussion of multiple waveform estimation we develop an interesting interpretation of nonlinear estimation in the presence of noise that contains both a colored component and a white component.

5.4.5 Colored Noise Estimation

Consider the following problem:

$$r(t) = s(t, a(t)) + w(t) + n_c(t), \quad T_i \le t \le T_f. \tag{166}$$

Here $w(t)$ is a white noise component (spectral height $N_0/2$) and $n_c(t)$ is an independent colored noise component with covariance function $K_c(t, u)$. Then

$$K_n(t, u) = \frac{N_0}{2}\,\delta(t - u) + K_c(t, u). \tag{167}$$
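On a discrete time grid, (167) can be sketched numerically: the impulse becomes $I/\Delta$, where $\Delta$ is the grid spacing. The spectral height and the exponential form of $K_c$ below are illustrative assumptions, not values from the text:

```python
import numpy as np

N0 = 0.2          # assumed white-noise spectral height
Ti, Tf, n = 0.0, 1.0, 50
t = np.linspace(Ti, Tf, n)
dt = t[1] - t[0]

# Assumed colored component: exponential covariance K_c(t,u) = exp(-|t-u|)
Kc = np.exp(-np.abs(t[:, None] - t[None, :]))

# Discretized K_n(t,u) = (N0/2) delta(t-u) + K_c(t,u); the impulse becomes I/dt
Kn = (N0 / 2) * np.eye(n) / dt + Kc

# K_n inherits symmetry and positive definiteness from its two parts
assert np.allclose(Kn, Kn.T)
assert np.all(np.linalg.eigvalsh(Kn) > 0)
```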

The MAP estimate of $a(t)$ can be found from (31), (32), and (33):

$$\hat{a}(t) = \int_{T_i}^{T_f} K_a(t, u)\, \frac{\partial s(u, \hat{a}(u))}{\partial \hat{a}(u)}\, \left[r_g(u) - g(u)\right] du, \quad T_i \le t \le T_f, \tag{168}$$

where

$$r(t) - s(t, \hat{a}(t)) = \int_{T_i}^{T_f} K_n(t, u)\left[r_g(u) - g(u)\right] du, \quad T_i \le t \le T_f. \tag{169}$$

Substituting (167) into (169), we obtain

$$r(t) - s(t, \hat{a}(t)) = \int_{T_i}^{T_f} \left[\frac{N_0}{2}\,\delta(t - u) + K_c(t, u)\right]\left[r_g(u) - g(u)\right] du. \tag{170}$$
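Equation (169) is a linear integral equation for $r_g(u) - g(u)$: given the residual $r(t) - s(t, \hat{a}(t))$ and the kernel $K_n$, it reduces on a grid to a matrix solve. A minimal sketch, with an assumed kernel and an arbitrary stand-in for the residual:

```python
import numpy as np

n = 60
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

# Discretized noise covariance (assumed: white level plus exponential colored part)
N0 = 0.2
Kn = (N0 / 2) * np.eye(n) / dt + np.exp(-np.abs(t[:, None] - t[None, :]))

f = np.sin(2 * np.pi * t)   # stand-in for the residual r(t) - s(t, a_hat(t))

# Discretized Fredholm equation:  f(t) = integral Kn(t,u) h(u) du  ~  (Kn dt) h
h = np.linalg.solve(Kn * dt, f)

# Check: applying the kernel to h recovers the residual
assert np.allclose(Kn @ h * dt, f)
```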


We now want to demonstrate that the same estimate, $\hat{a}(t)$, is obtained if we estimate $a(t)$ and $n_c(t)$ jointly. In this case the message vector is

$$\mathbf{a}(t) = \begin{bmatrix} a(t) \\ n_c(t) \end{bmatrix} \tag{171}$$

and the received waveform is

$$r(t) = s(t, \mathbf{a}(t)) + w(t) = s(t, a(t)) + n_c(t) + w(t), \quad T_i \le t \le T_f. \tag{172}$$

Because we are including the colored noise in the message vector, the only additive noise is the white component:

$$K_n(t, u) = \frac{N_0}{2}\,\delta(t - u). \tag{173}$$

To use (160) we need the derivative matrix

$$\mathbf{D}(t, \mathbf{a}(t)) = \begin{bmatrix} \dfrac{\partial s(t, a(t))}{\partial a(t)} \\[4pt] 1 \end{bmatrix}. \tag{174}$$

Substituting into (160), we obtain two scalar equations,

$$\hat{a}(t) = \int_{T_i}^{T_f} K_a(t, u)\, \frac{\partial s(u, \hat{a}(u))}{\partial \hat{a}(u)}\, \frac{2}{N_0}\left[r(u) - s(u, \hat{a}(u)) - \hat{n}_c(u)\right] du, \quad T_i \le t \le T_f, \tag{175}$$

and

$$\hat{n}_c(t) = \int_{T_i}^{T_f} K_c(t, u)\, \frac{2}{N_0}\left[r(u) - s(u, \hat{a}(u)) - \hat{n}_c(u)\right] du, \quad T_i \le t \le T_f. \tag{176}$$

Looking at (168), (170), and (175), we see that $\hat{a}(t)$ will be the same in both cases if

$$\int_{T_i}^{T_f} \frac{2}{N_0}\left[r(u) - s(u, \hat{a}(u)) - \hat{n}_c(u)\right]\left[\frac{N_0}{2}\,\delta(t - u) + K_c(t, u)\right] du = r(t) - s(t, \hat{a}(t)); \tag{177}$$

but after the impulse is integrated out, the term $r(t) - s(t, \hat{a}(t))$ cancels from both sides, and (177) is identical to (176).

This leads to the following conclusion. Whenever there are independent white and colored noise components, we may always consider the colored noise as a message and estimate it jointly with $a(t)$. The reason for this result is that the message and the colored noise are independent and the noise enters $r(t)$ linearly. Thus the $N \times 1$ vector white noise problem includes all scalar colored noise problems in which there is a white noise component.

Before summarizing our results in this chapter, we briefly discuss the problem of estimating nonrandom waveforms.
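The equivalence can be checked numerically for the linear case $s(t, a(t)) = a(t)$: on a discrete grid, the estimate obtained by lumping $n_c(t)$ into the noise coincides with the $a$-component of the joint estimate. All covariance choices below are assumptions for illustration, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
t = np.linspace(0.0, 1.0, n)

# Assumed covariances for the message and the colored noise (illustrative)
Ka = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)
Kc = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.1)
sigma2 = 0.1   # discretized white-noise variance

r = rng.standard_normal(n)   # any observed record

# Approach 1: treat n_c as part of the noise, K_n = K_c + sigma2*I
a_hat_1 = Ka @ np.linalg.solve(Ka + Kc + sigma2 * np.eye(n), r)

# Approach 2: jointly estimate x = [a; n_c] with only white additive noise
H = np.hstack([np.eye(n), np.eye(n)])        # r = H x + w
Kx = np.block([[Ka, np.zeros((n, n))], [np.zeros((n, n)), Kc]])
x_hat = Kx @ H.T @ np.linalg.solve(H @ Kx @ H.T + sigma2 * np.eye(n), r)
a_hat_2 = x_hat[:n]

assert np.allclose(a_hat_1, a_hat_2)  # the two estimates of a(t) coincide
```

The equality is exact in the linear case because $H K_x H^T = K_a + K_c$, so both approaches apply the same gain to $r$.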


5.5 NONRANDOM WAVEFORM ESTIMATION

It is sometimes unrealistic to model the signal that we are trying to estimate as a random waveform. For example, we may know that each time a particular event occurs the transmitted message will have certain distinctive features. If the message is modeled as a sample function of a random process, then, in the process of designing the optimum receiver, we may average out the features that are important. Situations of this type arise in sonar and seismic classification problems. Here it is more useful to model the message as an unknown, but nonrandom, waveform. To design the optimum processor we extend the maximum-likelihood estimation procedure for nonrandom variables to the waveform case. An appropriate model for the received signal is

$$r(t) = s(t, a(t)) + n(t), \quad T_i \le t \le T_f, \tag{178}$$

where n(t) is a zero-mean Gaussian process.

To find the maximum-likelihood estimate, we write the ln likelihood function and then choose the waveform $a(t)$ that maximizes it. The ln likelihood function is the limit of (12b) as $K \to \infty$:

$$\ln \Lambda(a(t)) = \int_{T_i}^{T_f} dt \int_{T_i}^{T_f} du\, s(t, a(t))\, Q_n(t, u)\left[r(u) - \tfrac{1}{2}\, s(u, a(u))\right], \tag{179}$$

where $Q_n(t, u)$ is the inverse kernel of the noise. For arbitrary $s(t, a(t))$ the maximization of $\ln \Lambda(a(t))$ is difficult. Fortunately, in the case of most interest to us the procedure is straightforward. This is the case in which the range of the function $s(t, a(t))$ includes all possible values of $r(t)$. An important example in which this is true is
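When the range of $s(t, a(t))$ covers all values of $r(t)$, as for $s(t, a(t)) = a(t)$ in white noise, a discretized version of (179) is maximized by $\hat{a}(t) = r(t)$. A minimal numerical sketch (the grid size, $N_0$, and the random record are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
dt = 1.0 / n
N0 = 0.5
Qn = (2.0 / N0) * np.eye(n) / dt   # discretized inverse kernel of white noise

r = rng.standard_normal(n)         # stand-in for the received record

def log_likelihood(a):
    # Discretized (179) with s(t, a(t)) = a(t):
    # ln L = sum_t sum_u a(t) Qn(t,u) [r(u) - a(u)/2] dt du
    return a @ Qn @ (r - a / 2) * dt * dt

# The maximizing waveform is a_hat(t) = r(t); any perturbation lowers ln L
best = log_likelihood(r)
for _ in range(100):
    assert log_likelihood(r + 0.1 * rng.standard_normal(n)) <= best
```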
