Van Trees, H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p.
ISBN 0-471-09517-6

$$Z_r(\omega) = Z_a(\omega) + Z_n(\omega), \qquad -\infty < \omega < \infty. \tag{236}$$
Fig. 3.27 System for estimation example.
† Further discussion of spectral decomposition is available in Gnedenko [27] or Bartlett ([28], Section 6-2).
Assume that a(t) and n(t) are sample functions from uncorrelated zero-mean Gaussian random processes with spectral densities $S_a(\omega)$ and $S_n(\omega)$, respectively. Because $Z_a(\omega)$ and $Z_n(\omega)$ are linear functionals of Gaussian processes, they also are Gaussian processes.
If we divide the frequency axis into a set of disjoint intervals, the increment variables will be independent. (See Fig. 3.28.) Now consider a particular interval $(\omega, \omega + d\omega]$ whose length is $d\omega$. Denote the increment variables for this interval as $dZ_r(\omega)$ and $dZ_a(\omega)$. Because of the statistical independence, we can estimate each increment variable $dZ_a(\omega)$ separately, and because MAP and MMSE estimation commute over linear transformations this is equivalent to estimating a(t).
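The per-interval independence can be checked numerically. The following is a minimal sketch (our construction, not the text's): for a circularly stationary process the covariance matrix is circulant, so the DFT coefficients, playing the role of the increment variables, are exactly uncorrelated across bins, and an empirical cross-moment matrix should be diagonal up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, N = 20000, 64

# Circularly stationary process: white Gaussian noise through a circular
# smoothing filter, so the covariance matrix is circulant and the DFT
# diagonalizes it exactly.
h = np.exp(-0.5 * np.arange(N))                    # smoothing filter
H = np.fft.fft(h)
w = rng.standard_normal((n_trials, N))
x = np.fft.ifft(np.fft.fft(w, axis=1) * H, axis=1).real

Z = np.fft.fft(x, axis=1)                          # per-bin "increments"
C = Z.conj().T @ Z / n_trials                      # E[Z_k^* Z_l], N x N

d = np.abs(np.diag(C))
rho = np.abs(C) / np.sqrt(np.outer(d, d))          # normalized correlations
np.fill_diagonal(rho, 0.0)
print("max cross-bin correlation:", rho.max())     # small: sampling noise only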
The a posteriori probability of $dZ_a(\omega)$, given that $dZ_r(\omega)$ was received, is just
$$p_{\,dZ_a(\omega)\,|\,dZ_r(\omega)}\!\left[dZ_a(\omega)\,\middle|\,dZ_r(\omega)\right] = k \exp\left\{-\frac{\left|dZ_r(\omega) - dZ_a(\omega)\right|^2}{2\,S_n(\omega)\,d\omega/2\pi} - \frac{\left|dZ_a(\omega)\right|^2}{2\,S_a(\omega)\,d\omega/2\pi}\right\}. \tag{237}$$
[This is simply (2-141) with N = 2 because $dZ_r(\omega)$ is complex.]
Because the a posteriori density is Gaussian, the MAP and MMSE estimates coincide. The solution is easily found by completing the square and recognizing the conditional mean. This gives
$$d\hat{Z}_a(\omega)\big|_{\text{MAP}} = d\hat{Z}_a(\omega)\big|_{\text{MMSE}} = \frac{S_a(\omega)}{S_a(\omega) + S_n(\omega)}\, dZ_r(\omega). \tag{238}$$
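The completing-the-square step is quick to verify. Writing $\sigma_a^2 \triangleq S_a(\omega)\,d\omega/2\pi$ and $\sigma_n^2 \triangleq S_n(\omega)\,d\omega/2\pi$ (shorthand of ours, not the text's), the exponent in (237) rearranges as

$$-\frac{|dZ_r - dZ_a|^2}{2\sigma_n^2} - \frac{|dZ_a|^2}{2\sigma_a^2} = -\frac{\sigma_a^2 + \sigma_n^2}{2\sigma_a^2 \sigma_n^2}\left|dZ_a - \frac{\sigma_a^2}{\sigma_a^2 + \sigma_n^2}\, dZ_r\right|^2 - \frac{|dZ_r|^2}{2(\sigma_a^2 + \sigma_n^2)}.$$

Only the first term involves $dZ_a$, so the a posteriori density is Gaussian with conditional mean $[\sigma_a^2/(\sigma_a^2 + \sigma_n^2)]\,dZ_r$; the $d\omega/2\pi$ factors cancel in the ratio, which is exactly (238).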
Therefore the minimum mean-square error estimate is obtained by passing r(t) through a linear filter,
$$H(\omega) = \frac{S_a(\omega)}{S_a(\omega) + S_n(\omega)}. \tag{239}$$
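A minimal numerical sketch of (239) may be useful (the one-pole message spectrum, noise level, and parameter values below are assumptions of ours, not from the text): synthesize a(t) and n(t) with known spectra, apply the filter bin by bin via the FFT, and compare mean-square errors.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt = 4096, 0.01
w = 2 * np.pi * np.fft.fftfreq(N, dt)      # radian frequency of each FFT bin

# Assumed spectra: one-pole message spectrum, white noise of level N0/2.
k, P, N0 = 1.0, 1.0, 0.1
Sa = 2 * k * P / (w**2 + k**2)
Sn = (N0 / 2) * np.ones_like(w)

def shape(S):
    """White Gaussian noise shaped to (approximately) spectral density S."""
    W = np.fft.fft(rng.standard_normal(N))
    return np.fft.ifft(np.sqrt(S / dt) * W).real

a, n = shape(Sa), shape(Sn)
r = a + n

# Noncausal MMSE filter of (239), applied bin by bin in the frequency domain.
H = Sa / (Sa + Sn)
a_hat = np.fft.ifft(H * np.fft.fft(r)).real

print("MSE before filtering:", np.mean((r - a) ** 2))
print("MSE after  filtering:", np.mean((a_hat - a) ** 2))
```

The filter is a per-bin shrinkage: bins where the message dominates pass nearly unchanged, bins where the noise dominates are attenuated toward zero.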
We see that the Gaussian assumption and the MMSE criterion lead to a linear filter. In the model in Section 3.4.5 we required linearity but did not assume Gaussianness. Clearly, the two filters should be identical. To verify this, we take the limit of the finite-time-interval result. For the special case of white noise we can modify the result in (154) to take into account the complex eigenfunctions and the doubly infinite sum. The result is
$$h_o(t, u) = \sum_{i=-\infty}^{\infty} \frac{\lambda_i}{\lambda_i + N_0/2}\, \phi_i(t)\, \phi_i^*(u). \tag{240}$$
Using (177) and (178), we have
$$\lim_{T \to \infty} h_o(t, u) = 2 \int_0^{\infty} \frac{S_a(\omega)}{S_a(\omega) + N_0/2}\, \cos \omega(t - u)\, \frac{d\omega}{2\pi}, \tag{241}$$
which corresponds to (239).
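The correspondence is a one-line check: the filter (239) with $S_n(\omega) = N_0/2$ has a real, even transfer function, so its impulse response is

$$h_o(t - u) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{S_a(\omega)}{S_a(\omega) + N_0/2}\, e^{j\omega(t-u)}\, d\omega = 2\int_0^{\infty} \frac{S_a(\omega)}{S_a(\omega) + N_0/2}\, \cos \omega(t-u)\, \frac{d\omega}{2\pi},$$

which is the right-hand side of (241).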
Fig. 3.28 Integrated transforms of a(t) and r(t).
In most of our developments we consider a finite time interval and use the orthogonal series expansion of Section 3.3. Then, to include the infinite-interval stationary-process case, we use the asymptotic results of Section 3.4.6. This leads us heuristically to the correct answer for infinite time. A rigorous approach for the infinite interval would require the integrated-transform technique we have just developed.
Before summarizing the results in this chapter, we discuss briefly how the results of Section 3.3 can be extended to vector random processes.
3.7 VECTOR RANDOM PROCESSES
In many cases of practical importance we are concerned with more than one random process at the same time; for example, in the phased arrays used in radar systems the input at each element must be considered. Analogous problems are present in sonar arrays and seismic arrays in which the received signal has a number of components. In telemetry systems a number of messages are sent simultaneously.
In all of these cases it is convenient to work with a single vector random process x(t) whose components are the processes of interest. If there are N processes, $x_1(t), x_2(t), \ldots, x_N(t)$, we define x(t) as a column matrix,
$$\mathbf{x}(t) \triangleq \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_N(t) \end{bmatrix}. \tag{242}$$
"*i(0" 'm^ty
mx(0 A E x2(t) m2(t)
.xN(t). jnN(t)_
The dimension N may be finite or countably infinite. Just as in the single process case, the second moment properties are described by the process means and covariance functions. In addition, the cross-covariance functions between the various processes must be known. The mean value function is a vector
(243)
and the covariances may be described by an $N \times N$ matrix $\mathbf{K}_x(t, u)$ whose elements are
$$K_{ij}(t, u) \triangleq E\{[x_i(t) - m_i(t)][x_j(u) - m_j(u)]\}. \tag{244}$$
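A numerical sketch of (244) follows; the three-component construction (a common exponentially correlated source seen through different gains, plus independent noise) is entirely hypothetical, chosen so that $\mathbf{K}_x(t, u)$ has a known closed form to compare against.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, T, N, tau = 5000, 100, 3, 10.0

# Source s(t) with covariance exp(-|t-u|/tau), via a Cholesky factor.
tt = np.arange(T)
K_s = np.exp(-np.abs(tt[:, None] - tt[None, :]) / tau)
L = np.linalg.cholesky(K_s + 1e-9 * np.eye(T))
s = rng.standard_normal((n_trials, T)) @ L.T              # source realizations

gains = np.array([1.0, 0.5, -0.8])
x = gains[None, :, None] * s[:, None, :] \
    + 0.3 * rng.standard_normal((n_trials, N, T))         # x[trial, i, t]

t_idx, u_idx = 10, 20
xt = x[:, :, t_idx] - x[:, :, t_idx].mean(axis=0)         # x(t) - m_x(t)
xu = x[:, :, u_idx] - x[:, :, u_idx].mean(axis=0)         # x(u) - m_x(u)
K_tu = xt.T @ xu / n_trials                               # empirical K_x(t, u)

# For t != u the noise drops out: K_ij(t,u) = g_i g_j exp(-|t-u|/tau).
K_theory = np.outer(gains, gains) * np.exp(-abs(t_idx - u_idx) / tau)
print("max |empirical - theoretical|:", np.abs(K_tu - K_theory).max())
```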
We want to derive a series expansion for the vector random process x(t). There are several possible representations, but two seem particularly efficient. In the first method we use a set of vector functions as coordinate functions and have scalar coefficients. In the second method we use a set of scalar functions as coordinate functions and have vector coefficients. For the first method and finite N, the modification of the properties on pp. 180-181 is straightforward. For infinite N we must be more careful. A detailed derivation that is valid for infinite N is given in [24]. In the text we go through some of the details for finite N. In Chapter II-5 we use the infinite N result without proof. For the second method, additional restrictions are needed. Once again we consider zero-mean processes.
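A discretized sketch may help fix ideas for the first method (the specific two-component process below is our hypothetical example, not one from the text): stacking the N components over T time samples turns the vector eigenfunction problem into an ordinary symmetric eigendecomposition.

```python
import numpy as np

# First representation: vector coordinate functions, scalar coefficients.
# Stack the N components of x(t) over T samples into one length-N*T vector;
# the eigenvectors of the stacked covariance, reshaped to N x T, play the
# role of the vector eigenfunctions phi_i(t), and the expansion coefficients
# are uncorrelated scalars with variances lambda_i.
T, N, tau = 50, 2, 8.0
tt = np.arange(T)
K_s = np.exp(-np.abs(tt[:, None] - tt[None, :]) / tau)    # source covariance
gains = np.array([1.0, -0.6])

# (N*T) x (N*T) covariance of x_i(t) = gains[i] * s(t) plus white noise.
K_big = np.kron(np.outer(gains, gains), K_s) + 0.05 * np.eye(N * T)

lam, V = np.linalg.eigh(K_big)            # ascending scalar eigenvalues
phi = V.T.reshape(N * T, N, T)            # phi[i] ~ sampled phi_i(t), N x T
print("three largest eigenvalues:", lam[-1], lam[-2], lam[-3])
```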