Van Trees H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p.
ISBN 0-471-09517-6

We assume that the message of interest is x_{2B}(t). In Chapter II.2 we shall see the model in Fig. 6.49 and the resulting estimator play an important role in the FM problem. This is just a particular example of the general problem in which the message is subjected to a linear operation before transmission. There are two easy ways to solve this problem. One way is to observe that because we have already solved Example 5A we can use that result to obtain the answer. To use it we must express x_{2B}(t) as a linear transformation of x_{1A}(t) and x_{2A}(t), the state variables in Example 5A.
\beta x_{2B}(t) = \dot{x}_{1B}(t)   (381)

and

x_{1B}(t) = x_{1A}(t),   (382)

if we require

q_A = \beta^2 q_B.   (383)

Observing that

\dot{x}_{1A}(t) = -k\,x_{1A}(t) + x_{2A}(t),   (384)

we obtain

\beta x_{2B}(t) = \dot{x}_{1A}(t) = -k\,x_{1A}(t) + x_{2A}(t).   (385)
Fig. 6.49 Message generation, example 5B: u(t) → [1/(s+k)] → x_{2B}(t) → [\beta/s] → x_{1B}(t).
560 6.3 Kalman-Bucy Filters
Now minimum mean-square error filtering commutes over linear transformations. (The proof is identical to the proof of (2.237).) Therefore

\hat{x}_{2B}(t) = \frac{1}{\beta}\left[-k\,\hat{x}_{1A}(t) + \hat{x}_{2A}(t)\right].   (386)
Observe that this is not equivalent to letting \beta\hat{x}_{2B}(t) equal the derivative of \hat{x}_{1A}(t). Thus

\hat{x}_{2B}(t) \neq \frac{1}{\beta}\,\frac{d\hat{x}_{1A}(t)}{dt}.   (387)
With these observations we may draw the optimum filter by modifying Fig. 6.48. The result is shown in Fig. 6.50. The error variance follows easily:
\beta^2\,\xi_{22B}(t) = k^2\,\xi_{11A}(t) - 2k\,\xi_{12A}(t) + \xi_{22A}(t).   (388)
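Because \beta x_{2B}(t) is the linear combination -k\,x_{1A}(t) + x_{2A}(t) of the Example 5A state variables, the right side of (388) is just a quadratic form in the Example 5A error covariance. A minimal numerical check of that identity, using an arbitrary illustrative covariance matrix (the values are hypothetical, not from the text):

```python
import numpy as np

# Hypothetical numerical values, for illustration only (not from the text).
k = 2.0
Xi_A = np.array([[0.50, 0.20],    # [[xi_11A, xi_12A],
                 [0.20, 0.80]])   #  [xi_12A, xi_22A]]

# beta*x2B = -k*x1A + x2A, so the error in estimating beta*x2B is the
# same linear combination of the Example 5A errors; its variance is a
# quadratic form in the Example 5A error covariance.
a = np.array([-k, 1.0])
var_quadratic = a @ Xi_A @ a

# Expanded term by term, exactly as in (388):
var_expanded = k**2 * Xi_A[0, 0] - 2*k * Xi_A[0, 1] + Xi_A[1, 1]

assert np.isclose(var_quadratic, var_expanded)
```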
Alternatively, if we had not solved Example 5A, we would approach the problem directly. We identify the message as one of the state variables. The appropriate matrices are
\mathbf{x}(t) = \begin{bmatrix} x_{1B}(t) \\ x_{2B}(t) \end{bmatrix}, \qquad
\mathbf{F} = \begin{bmatrix} 0 & \beta \\ 0 & -k \end{bmatrix}, \qquad
\mathbf{G} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad
\mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix},   (389)

and

Q = q_B.   (390)
The structure of the estimator is shown in Fig. 6.51. The variance equation is

\dot{\xi}_{11}(t) = 2\beta\,\xi_{12}(t) - \frac{2}{N_0}\,\xi_{11}^2(t),

\dot{\xi}_{12}(t) = \beta\,\xi_{22}(t) - k\,\xi_{12}(t) - \frac{2}{N_0}\,\xi_{11}(t)\,\xi_{12}(t),   (391)

\dot{\xi}_{22}(t) = -2k\,\xi_{22}(t) - \frac{2}{N_0}\,\xi_{12}^2(t) + q_B.
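The three scalar equations of (391) are the components of the matrix variance equation \dot{\Xi} = F\Xi + \Xi F^T - (2/N_0)\,\Xi C^T C\,\Xi + q_B G G^T. A sketch of integrating that matrix equation numerically, with hypothetical parameter values chosen only for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values, for illustration only.
beta, k, qB, N0 = 1.0, 2.0, 0.5, 0.1

F = np.array([[0.0, beta],
              [0.0, -k]])      # state matrix of (389)
G = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def riccati_rhs(t, xi_flat):
    """Matrix form of (391): Xi' = F Xi + Xi F' - (2/N0) Xi C'C Xi + qB G G'."""
    Xi = xi_flat.reshape(2, 2)
    dXi = F @ Xi + Xi @ F.T - (2.0/N0) * Xi @ C.T @ C @ Xi + qB * (G @ G.T)
    return dXi.ravel()

# Integrate from Xi(0) = 0 (perfect initial knowledge of the state).
sol = solve_ivp(riccati_rhs, (0.0, 10.0), np.zeros(4), rtol=1e-8)
Xi_T = sol.y[:, -1].reshape(2, 2)   # error covariance at t = 10
```

Expanding the matrix products component by component recovers the three scalar equations of (391).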
Even in the steady state, these equations appear difficult to solve analytically. In this particular case we are helped by having just solved Example 5A. Clearly, \xi_{11}(t) must be the same in both cases if we let q_A = \beta^2 q_B. From (379),

\xi_{11\infty} = \frac{kN_0}{2}\left[(1+\Lambda)^{1/2} - 1\right].   (392)

The other gain term \xi_{12\infty} now follows easily from the steady-state form of the first equation in (391):

\xi_{12\infty} = \frac{\xi_{11\infty}^2}{\beta N_0}.   (393)
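The steady-state relation between \xi_{12\infty} and \xi_{11\infty} can be checked by solving the algebraic Riccati equation directly. A sketch, again with hypothetical parameter values (note scipy's convention solves A'X + XA - XBR^{-1}B'X + Q = 0, so the filtering problem is passed in transposed form):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical parameter values, for illustration only.
beta, k, qB, N0 = 1.0, 2.0, 0.5, 0.1

F = np.array([[0.0, beta],
              [0.0, -k]])
G = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Filtering ARE: F Xi + Xi F' - Xi C' R^-1 C Xi + qB G G' = 0.
# scipy's solve_continuous_are solves A'X + XA - XB R^-1 B'X + Q = 0,
# so we pass A = F' and B = C'.
Xi_inf = solve_continuous_are(F.T, C.T, qB * (G @ G.T), np.array([[N0/2.0]]))

# Steady-state form of the first equation of (391):
# 0 = 2*beta*xi12 - (2/N0)*xi11**2  =>  xi12 = xi11**2/(beta*N0), i.e. (393).
assert np.isclose(Xi_inf[0, 1], Xi_inf[0, 0]**2 / (beta * N0))
```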
Because we have assumed that the message of interest is x_2(t), we can also easily calculate its error variance from the steady-state form of the last equation in (391):

\xi_{22\infty} = \frac{1}{2k}\left[q_B - \frac{2}{N_0}\,\xi_{12\infty}^2\right].   (394)
It is straightforward to verify that (388) and (394) give the same result and that the block diagrams in Figs. 6.50 and 6.51 have identical responses between r(t) and \hat{x}_2(t). The internal differences between the two systems arise from the two different state representations we chose.
Fig. 6.51 Optimum estimator: example 5B (form #2).
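The equivalence of the two state representations can also be verified numerically: with q_A = \beta^2 q_B, both realizations generate the same output process, so both algebraic Riccati equations must yield the same \xi_{11}. A sketch with hypothetical parameter values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical parameter values, for illustration only.
beta, k, qB, N0 = 1.0, 2.0, 0.5, 0.1
qA = beta**2 * qB          # the matching condition (383)

C = np.array([[1.0, 0.0]])
G = np.array([[0.0], [1.0]])
R = np.array([[N0/2.0]])

# Example 5A realization: x1A' = -k x1A + x2A,  x2A' = u.
F_A = np.array([[-k, 1.0],
                [0.0, 0.0]])
# Example 5B realization: x1B' = beta x2B,  x2B' = -k x2B + u.
F_B = np.array([[0.0, beta],
                [0.0, -k]])

Xi_A = solve_continuous_are(F_A.T, C.T, qA * (G @ G.T), R)
Xi_B = solve_continuous_are(F_B.T, C.T, qB * (G @ G.T), R)

# x1 is the same process in both realizations, so its steady-state
# error variance xi_11 must agree.
assert np.isclose(Xi_A[0, 0], Xi_B[0, 0])
```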
Example 6. Now consider the same message process as in Example 1 but assume that the noise consists of the sum of a white noise and an uncorrelated colored noise,
n(t) = n_c(t) + w(t)   (395)

and

S_{n_c}(\omega) = \frac{2k_c P_c}{\omega^2 + k_c^2}.   (396)
As already discussed, we simply include nc(t) as a component in the state vector. Thus,
\mathbf{x}(t) = \begin{bmatrix} x(t) \\ n_c(t) \end{bmatrix},   (397)

\mathbf{F}(t) = \begin{bmatrix} -k & 0 \\ 0 & -k_c \end{bmatrix},   (398)

\mathbf{G}(t) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},   (399)

\mathbf{C}(t) = \begin{bmatrix} 1 & 1 \end{bmatrix},   (400)

\mathbf{Q} = \begin{bmatrix} 2kP & 0 \\ 0 & 2k_c P_c \end{bmatrix},   (401)

and

R(t) = \frac{N_0}{2}.   (402)
The gain matrix \mathbf{z}(t) becomes

\mathbf{z}(t) = \frac{2}{N_0}\,\boldsymbol{\xi}(t)\,\mathbf{C}^T = \frac{2}{N_0}\begin{bmatrix} \xi_{11}(t) + \xi_{12}(t) \\ \xi_{12}(t) + \xi_{22}(t) \end{bmatrix}.
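A sketch assembling the matrices of (398)-(402) and computing the steady-state version of this gain; the parameter values are hypothetical, chosen only to make the example run:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical parameter values, for illustration only.
k, P = 1.0, 1.0        # message: S_x(w)   = 2*k*P /(w^2 + k^2)
kc, Pc = 3.0, 0.5      # colored noise: S_nc(w) = 2*kc*Pc/(w^2 + kc^2)
N0 = 0.2               # white-noise level

F = np.array([[-k, 0.0], [0.0, -kc]])   # (398)
G = np.eye(2)                           # (399)
C = np.array([[1.0, 1.0]])              # (400): r(t) = x + nc + w
Q = np.diag([2*k*P, 2*kc*Pc])           # (401)
R = np.array([[N0/2.0]])                # (402)

# Steady-state error covariance from the filtering ARE
# (pass A = F', B = C' for scipy's sign convention).
Xi_inf = solve_continuous_are(F.T, C.T, G @ Q @ G.T, R)

# Steady-state gain matrix z = (2/N0) * Xi * C'.
z = (2.0/N0) * Xi_inf @ C.T
```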