Van Trees, H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p. ISBN 0-471-09517-6.

where R(t) is positive definite.
In general, x(t) is an n-dimensional vector and the channel is m-dimensional, so that the modulation matrix is an m x n matrix. We assume that the channel noise w(t) and the white process u(t) which generates the message are uncorrelated.
With these two modifications our model is sufficiently general to include most cases of interest. The next step is to derive a differential equation for the optimum estimate. Before doing that we summarize the important relations.
Fig. 6.35 A simple diversity system.
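To fix the dimensions just described: in a diversity system like that of Fig. 6.35, a single message is observed over several channels at once. The Python sketch below is purely illustrative (the two-channel setup, unit gains, and noise level are assumptions, not values from the text); it shows a scalar message (n = 1) received over m = 2 channels through a 2 x 1 modulation matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed two-channel diversity setup: one scalar message x(t) is
# observed over m = 2 channels, so the modulation matrix C is 2 x 1.
C = np.array([[1.0],
              [1.0]])                   # each channel passes the message directly
x = np.array([[0.7]])                   # message value at one instant (n = 1)
w = rng.normal(0.0, 0.3, size=(2, 1))   # independent channel noises w1(t), w2(t)
r = C @ x + w                           # received vector: one entry per channel
print(r.shape)                          # (2, 1) -- an m-dimensional observation
```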
Summary of Model
All processes are assumed to be generated by passing white noise through a linear time-varying system. The processes are described by the vector differential equation

$$\frac{d\mathbf{x}(t)}{dt} = \mathbf{F}(t)\,\mathbf{x}(t) + \mathbf{G}(t)\,\mathbf{u}(t), \tag{302}$$

where

$$E[\mathbf{u}(t)\,\mathbf{u}^T(\tau)] = \mathbf{Q}\,\delta(t - \tau), \tag{303}$$

and x(t_0) is specified either as a deterministic vector or as a random vector with known second-moment statistics.
The solution to (302) may be written in terms of a transition matrix:
$$\mathbf{x}(t) = \boldsymbol{\Phi}(t, t_0)\,\mathbf{x}(t_0) + \int_{t_0}^{t} \boldsymbol{\Phi}(t, \tau)\,\mathbf{G}(\tau)\,\mathbf{u}(\tau)\,d\tau. \tag{304}$$
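When F is constant, the transition matrix in (304) reduces to a matrix exponential, Phi(t, t_0) = exp[F(t - t_0)]. The following sketch is an illustration under that time-invariant assumption (the particular 2 x 2 F is arbitrary, not from the text); it computes Phi numerically and checks two of its defining properties.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary time-invariant system matrix (illustrative only):
F = np.array([[ 0.0,  1.0],
              [-2.0, -0.5]])

def phi(t, t0):
    """Transition matrix Phi(t, t0) = exp[F (t - t0)] for constant F."""
    return expm(F * (t - t0))

# Composition property: Phi(t2, t0) = Phi(t2, t1) Phi(t1, t0).
t0, t1, t2 = 0.0, 0.7, 1.5
assert np.allclose(phi(t2, t0), phi(t2, t1) @ phi(t1, t0))

# Phi satisfies dPhi/dt = F Phi with Phi(t0, t0) = I (finite-difference check).
eps = 1e-6
dphi = (phi(t0 + eps, t0) - np.eye(2)) / eps
assert np.allclose(dphi, F, atol=1e-4)
```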
The output process y(t) is obtained by a linear transformation of the state vector. It is observed after being corrupted by additive white noise. The received signal r(t) is described by the equation
$$\mathbf{r}(t) = \mathbf{C}(t)\,\mathbf{x}(t) + \mathbf{w}(t). \tag{305}$$
The measurement noise is white and is described by a covariance matrix:
$$E[\mathbf{w}(t)\,\mathbf{w}^T(\tau)] = \mathbf{R}(t)\,\delta(t - \tau). \tag{306}$$
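To make (302)-(306) concrete, here is a minimal simulation sketch. The scalar message model, step size, and the levels of Q and R are illustrative assumptions, not values from the text. The one essential detail is that discretized white noise must be drawn with per-step covariance Q/dt (and R/dt for the measurement noise) so that its integrated effect matches the delta-function covariances in (303) and (306).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed first-order message model (for illustration only):
# dx/dt = F x + G u,  r = C x + w.
F, G, C = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
Q, R = np.array([[2.0]]), np.array([[0.5]])

dt, n_steps = 1e-3, 5000
x = np.zeros((1, 1))                 # deterministic initial state x(t0)
xs, rs = [], []
for _ in range(n_steps):
    # Per-step covariance Q/dt (resp. R/dt) reproduces
    # E[u(t) u^T(tau)] = Q delta(t - tau) in the limit dt -> 0.
    u = rng.multivariate_normal(np.zeros(1), Q / dt).reshape(1, 1)
    w = rng.multivariate_normal(np.zeros(1), R / dt).reshape(1, 1)
    x = x + dt * (F @ x + G @ u)     # Euler step of the state equation (302)
    rs.append((C @ x + w).item())    # received signal (305)
    xs.append(x.item())
```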
Up to this point we have discussed only the second-moment properties of the random processes generated by driving linear dynamic systems with white noise. Clearly, if u(t) and w(t) are jointly Gaussian vector processes and if x(t_0) is a statistically independent Gaussian random vector, then the Gaussian assumption on p. 471 will hold. (The independence of x(t_0) is only convenient, not necessary.)
The next step is to show how we can modify the optimum linear filtering results we previously obtained to take advantage of this method of representation.
6.3.2 Derivation of Estimator Equations
In this section we want to derive a differential equation whose solution is the minimum mean-square estimate of the message (or messages). We recall that the MMSE estimate of a vector x(t) is a vector x̂(t) whose components x̂_i(t) are chosen so that the mean-square error in estimating each
component is minimized. In other words, E[(x_i(t) − x̂_i(t))^2] is minimized for each i. This implies that the sum of the mean-square errors, E{[x(t) − x̂(t)]^T [x(t) − x̂(t)]} = Σ_i E[(x_i(t) − x̂_i(t))^2], is also minimized. The derivation is straightforward but somewhat lengthy. It consists of four parts.
1. Starting with the vector Wiener-Hopf equation (Property 3A-V) for realizable estimation, we derive a differential equation in t, with τ as a parameter, that the optimum filter h_o(t, τ) must satisfy. This is (317).
2. Because the optimum estimate x̂(t) is obtained by passing the received signal through the optimum filter, (317) leads to a differential equation that the optimum estimate must satisfy. This is (320). It turns out that all the coefficients in this equation are known except h_o(t, t).
3. The next step is to find an expression for h_o(t, t). Property 4B-V expresses h_o(t, t) in terms of the error matrix ξ_P(t). Thus we can equally well find an expression for ξ_P(t). To do this we first find a differential equation for the error x_ε(t). This is (325).
4. Finally, because

$$\boldsymbol{\xi}_P(t) = E[\mathbf{x}_\epsilon(t)\,\mathbf{x}_\epsilon^T(t)], \tag{307}$$

we can use (325) to find a matrix differential equation that ξ_P(t) must satisfy. This is (330). We now carry out these four steps in detail; a numerical sketch of the equations they lead to is given first, below.
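As a preview of where these four steps end up: the result is the familiar Kalman-Bucy filter, in which the estimator equation takes the form dx̂(t)/dt = F(t)x̂(t) + h_o(t, t)[r(t) − C(t)x̂(t)], with gain h_o(t, t) = ξ_P(t)C^T(t)R^{-1}(t), and the variance equation is the matrix Riccati equation dξ_P/dt = F ξ_P + ξ_P F^T + G Q G^T − ξ_P C^T R^{-1} C ξ_P. The sketch below integrates both for the same illustrative scalar model used earlier; it is a numerical preview under those assumptions, not a substitute for the derivation that follows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative scalar model as before (assumed, not from the text).
F, G, C = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
Q, R = np.array([[2.0]]), np.array([[0.5]])
R_inv = np.linalg.inv(R)

dt, n_steps = 1e-3, 5000
x = np.zeros((1, 1))                 # true state
x_hat = np.zeros((1, 1))             # estimate x-hat(t)
xi_P = np.array([[1.0]])             # error covariance xi_P(t0)

for _ in range(n_steps):
    # Simulate the message and received signal, as in (302) and (305).
    u = rng.multivariate_normal(np.zeros(1), Q / dt).reshape(1, 1)
    w = rng.multivariate_normal(np.zeros(1), R / dt).reshape(1, 1)
    x = x + dt * (F @ x + G @ u)
    r = C @ x + w

    # Gain h_o(t, t) = xi_P C^T R^{-1}, then one Euler step of the
    # estimator equation and of the variance (Riccati) equation.
    gain = xi_P @ C.T @ R_inv
    x_hat = x_hat + dt * (F @ x_hat + gain @ (r - C @ x_hat))
    xi_P = xi_P + dt * (F @ xi_P + xi_P @ F.T + G @ Q @ G.T
                        - xi_P @ C.T @ R_inv @ C @ xi_P)

# xi_P settles near the positive root of the algebraic Riccati equation
# F P + P F^T + G Q G^T - P C^T R^{-1} C P = 0.
```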
Step 1. We start with the integral equation obtained for the optimum finite-time point estimator [Property 3A-V, (52)]. We are estimating the entire vector x(t):
$$\mathbf{K}_x(t, \sigma)\,\mathbf{C}^T(\sigma) = \int_{T_i}^{t} \mathbf{h}_o(t, \tau)\,\mathbf{K}_r(\tau, \sigma)\,d\tau, \qquad T_i < \sigma < t, \tag{308}$$
where
$$\mathbf{K}_r(\tau, \sigma) = \mathbf{C}(\tau)\,\mathbf{K}_x(\tau, \sigma)\,\mathbf{C}^T(\sigma) + \mathbf{R}(\tau)\,\delta(\tau - \sigma). \tag{309}$$
Differentiating both sides with respect to t, we have
$$\frac{\partial \mathbf{K}_x(t, \sigma)}{\partial t}\,\mathbf{C}^T(\sigma) = \mathbf{h}_o(t, t)\,\mathbf{K}_r(t, \sigma) + \int_{T_i}^{t} \frac{\partial \mathbf{h}_o(t, \tau)}{\partial t}\,\mathbf{K}_r(\tau, \sigma)\,d\tau, \qquad T_i < \sigma < t. \tag{310}$$
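The h_o(t, t)K_r(t, σ) term in (310) appears because the upper limit of the integral in (308) is the variable t. Spelling out the differentiation via Leibniz's rule (a standard step, not numbered in the text):

$$\frac{\partial}{\partial t}\int_{T_i}^{t} \mathbf{h}_o(t, \tau)\,\mathbf{K}_r(\tau, \sigma)\,d\tau = \mathbf{h}_o(t, t)\,\mathbf{K}_r(t, \sigma) + \int_{T_i}^{t} \frac{\partial \mathbf{h}_o(t, \tau)}{\partial t}\,\mathbf{K}_r(\tau, \sigma)\,d\tau.$$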
First we consider the first term on the right-hand side of (310). For σ < t we see from (309) that
$$\mathbf{K}_r(t, \sigma) = \mathbf{C}(t)\left[\mathbf{K}_x(t, \sigma)\,\mathbf{C}^T(\sigma)\right], \qquad \sigma < t. \tag{311}$$
The term inside the bracket is just the left-hand side of (308). Therefore,
$$\mathbf{h}_o(t, t)\,\mathbf{K}_r(t, \sigma) = \mathbf{h}_o(t, t)\,\mathbf{C}(t)\int_{T_i}^{t} \mathbf{h}_o(t, \tau)\,\mathbf{K}_r(\tau, \sigma)\,d\tau, \qquad \sigma < t. \tag{312}$$
Now consider the first term on the left-hand side of (310). Since K_x(t, σ) = E[x(t)x^T(σ)],

$$\frac{\partial \mathbf{K}_x(t, \sigma)}{\partial t} = E\!\left[\frac{d\mathbf{x}(t)}{dt}\,\mathbf{x}^T(\sigma)\right]. \tag{313}$$
Using (302), we have
$$\frac{\partial \mathbf{K}_x(t, \sigma)}{\partial t} = \mathbf{F}(t)\,\mathbf{K}_x(t, \sigma) + \mathbf{G}(t)\,E[\mathbf{u}(t)\,\mathbf{x}^T(\sigma)], \tag{314}$$