Van Trees H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p.
ISBN 0-471-09517-6

\dot{x}_n(t) = -\sum_{k=1}^{n} p_{k-1}\, x_k(t) + b_0\, u(t).    (191)
Denoting the set of x_i(t) by a column matrix, we see that the following first-order n-dimensional vector equation is equivalent to the nth-order scalar equation:
\frac{d\mathbf{x}(t)}{dt} \triangleq \dot{\mathbf{x}}(t) = \mathbf{F}\,\mathbf{x}(t) + \mathbf{G}\,u(t),    (192)
where
\mathbf{F} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -p_0 & -p_1 & -p_2 & \cdots & -p_{n-1} \end{bmatrix}    (193)
and
\mathbf{G} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b_0 \end{bmatrix}.    (194)
The vector x(t) is called the state vector for this linear system, and (192) is called the state equation of the system. Note that the state vector x(t) we selected is not the only choice. Any nonsingular linear transformation of x(t) gives another state vector. The output y(t) is related to the state vector by the equation
y(t) = \mathbf{C}\,\mathbf{x}(t),    (195)
Differential Equation Representation of Linear Systems 521
where C is a 1 x n matrix,
\mathbf{C} = [\,1 \;\; 0 \;\; 0 \;\cdots\; 0\,].    (196)
Equation (195) is called the output equation of the system. The two equations (192) and (195) completely characterize the system.
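As a concrete check of the structure of (192)–(196), the sketch below builds the companion-form matrices F, G, and C numerically. The function name and the coefficient values p_0 = 2, p_1 = 3, b_0 = 1 are illustrative assumptions, not from the text.

```python
import numpy as np

def companion_state_space(p, b0):
    """Build F, G, C of (192)-(196) for the all-pole equation
    y^(n) + p_{n-1} y^(n-1) + ... + p_0 y = b_0 u.

    p = [p_0, ..., p_{n-1}] (illustrative argument layout)."""
    n = len(p)
    F = np.zeros((n, n))
    F[:-1, 1:] = np.eye(n - 1)     # superdiagonal of ones: x_i' = x_{i+1}
    F[-1, :] = -np.asarray(p)      # last row of (193): -p_0 ... -p_{n-1}
    G = np.zeros((n, 1))
    G[-1, 0] = b0                  # (194): input drives only the last state
    C = np.zeros((1, n))
    C[0, 0] = 1.0                  # (196): the output is y(t) = x_1(t)
    return F, G, C

# Example with n = 2 (coefficients chosen arbitrarily):
F, G, C = companion_state_space([2.0, 3.0], 1.0)
```

For n = 2 this yields F = [[0, 1], [-2, -3]], G = [[0], [1]], C = [[1, 0]], matching the pattern of (193), (194), and (196).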
Just as in the first example, we can generate both nonstationary and stationary random processes using the system described by (192) and (195). For stationary processes it is clear from (190) that we can generate any process with a rational spectrum of the form
S_y(\omega) = \frac{c}{d_{2n}\,\omega^{2n} + d_{2n-2}\,\omega^{2n-2} + \cdots + d_0}    (197)
by letting u(t) be a white noise process and t_0 = -\infty. In this case the state vector x(t) is a sample function from a vector random process, and y(t) is one component of this process.
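Driving the state equation with white noise can be imitated numerically with a simple Euler–Maruyama discretization of dx = Fx dt + G dW. This is only a sketch: the step size, horizon, noise height q, and the particular F and G below are illustrative assumptions, and the start is taken at x(t_0) = 0 with a long run so the transient dies out.

```python
import numpy as np

def simulate(F, G, q=1.0, dt=1e-3, steps=50_000, seed=0):
    """Euler-Maruyama sample path of dx = F x dt + G dW (a sketch)."""
    rng = np.random.default_rng(seed)
    n = F.shape[0]
    x = np.zeros(n)                  # x(t0) = 0; transient decays for stable F
    out = np.empty(steps)
    for k in range(steps):
        dw = rng.normal(0.0, np.sqrt(q * dt))   # white-noise increment
        x = x + F @ x * dt + G[:, 0] * dw
        out[k] = x[0]                # y(t) = x_1(t), one component of x(t)
    return out

# Stable second-order example (eigenvalues -1, -2; values illustrative):
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
y = simulate(F, G)
```

After the transient, the samples of y approximate a stationary process whose spectrum is rational in omega, as in (197).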
The next, more general differential equation is
y^{(n)}(t) + p_{n-1}\,y^{(n-1)}(t) + \cdots + p_0\,y(t) = b_{n-1}\,u^{(n-1)}(t) + \cdots + b_0\,u(t).    (198)
The first step is to find an analog computer-type realization that corresponds to this differential equation. We illustrate one possible technique by looking at a simple example.
Example 2A. Consider the case in which n = 2 and the initial conditions are zero. Then (198) is
\ddot{y}(t) + p_1\,\dot{y}(t) + p_0\,y(t) = b_1\,\dot{u}(t) + b_0\,u(t).    (199)
Our first observation is that we want to avoid actually differentiating u(t), because in many cases of interest it is a white noise process. Comparing the orders of the highest derivatives on the two sides of (199), we see that this is possible. An easy approach is to assume that u(t) exists as part of the input to the first integrator in Fig. 6.28 and examine the consequences. To do this we rearrange terms as shown in (200):
\frac{d}{dt}\bigl[\dot{y}(t) - b_1\,u(t)\bigr] + p_1\,\dot{y}(t) + p_0\,y(t) = b_0\,u(t).    (200)
The result is shown in Fig. 6.28. Defining the state variables as the integrator outputs, we obtain
x_1(t) = y(t)    (201a)
and
x_2(t) = \dot{y}(t) - b_1\,u(t).    (201b)
Using (200) and (201), we have
\dot{x}_1(t) = x_2(t) + b_1\,u(t),    (202a)
\dot{x}_2(t) = -p_0\,x_1(t) - p_1\bigl(x_2(t) + b_1\,u(t)\bigr) + b_0\,u(t)
             = -p_0\,x_1(t) - p_1\,x_2(t) + (b_0 - b_1 p_1)\,u(t).    (202b)
We can write (202) as a vector state equation by defining
\mathbf{F} = \begin{bmatrix} 0 & 1 \\ -p_0 & -p_1 \end{bmatrix}    (203a)
522 6.3 Kalman-Bucy Filters
and
\mathbf{G} = \begin{bmatrix} b_1 \\ b_0 - p_1 b_1 \end{bmatrix}.    (203b)
Then
\dot{\mathbf{x}}(t) = \mathbf{F}\,\mathbf{x}(t) + \mathbf{G}\,u(t).    (204a)
The output equation is
y(t) = [\,1 \;\; 0\,]\,\mathbf{x}(t) \triangleq \mathbf{C}\,\mathbf{x}(t).    (204b)
Equations 204a and 204b plus the initial condition \mathbf{x}(t_0) = \mathbf{0} characterize the system.
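One way to convince oneself that the F, G, C of (203)–(204) really realize (199) is to compare the transfer function C(sI - F)^{-1}G of the state model with the transfer function (b_1 s + b_0)/(s^2 + p_1 s + p_0) read directly off the differential equation. The numerical coefficient values below are illustrative assumptions.

```python
import numpy as np

# Coefficients of (199), chosen arbitrarily for the check:
p0, p1, b0, b1 = 2.0, 3.0, 5.0, 1.0

# F, G, C of (203a), (203b), (204b):
F = np.array([[0.0, 1.0], [-p0, -p1]])
G = np.array([[b1], [b0 - p1 * b1]])
C = np.array([[1.0, 0.0]])

def H_state(s):
    """Transfer function C (sI - F)^{-1} G of the state model."""
    return (C @ np.linalg.inv(s * np.eye(2) - F) @ G)[0, 0]

def H_direct(s):
    """Transfer function read directly from (199)."""
    return (b1 * s + b0) / (s**2 + p1 * s + p0)

# Agreement at a couple of test frequencies:
for s in (0.5 + 0.0j, 1.0 + 2.0j):
    assert abs(H_state(s) - H_direct(s)) < 1e-9
```

The agreement holds for any s that is not a pole, since C(sI - F)^{-1}G reduces algebraically to (b_1 s + b_0)/(s^2 + p_1 s + p_0) for this F, G, C.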
It is straightforward to extend this particular technique to the nth order (see Problem 6.3.1). We refer to it as canonical realization No. 1. Our choice of state variables was somewhat arbitrary. To demonstrate this, we reconsider Example 2A and develop a different state representation.
Example 2B. Once again
\ddot{y}(t) + p_1\,\dot{y}(t) + p_0\,y(t) = b_1\,\dot{u}(t) + b_0\,u(t).    (205)
As a first step we draw the two integrators and the two paths caused by b_1 and b_0. This partial system is shown in Fig. 6.29a. We now want to introduce feedback paths and identify state variables in such a way that the elements in F and G will be one of the coefficients in the original differential equation, unity, or zero. Looking at Fig. 6.29, we see that an easy way to do this is to feed back a weighted version of x_1(t) (= y(t)) into each summing point, as shown in Fig. 6.29b. The equations for the state variables are
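The excerpt breaks off before the state equations of this second realization are written out. The sketch below assumes the standard form that the described feedback of x_1(t) = y(t) into each summing point produces (an assumption, since the figure and equations are not in this excerpt): x_1' = -p_1 x_1 + x_2 + b_1 u, x_2' = -p_0 x_1 + b_0 u, y = x_1. It then checks that this realization has the same input-output behavior as canonical realization No. 1 of Example 2A; the coefficient values are illustrative.

```python
import numpy as np

p0, p1, b0, b1 = 2.0, 3.0, 5.0, 1.0

# Assumed second realization (feedback of x1 into each summing point):
F2 = np.array([[-p1, 1.0], [-p0, 0.0]])
G2 = np.array([[b1], [b0]])

# Canonical realization No. 1 from (203a)-(203b):
F1 = np.array([[0.0, 1.0], [-p0, -p1]])
G1 = np.array([[b1], [b0 - p1 * b1]])

C = np.array([[1.0, 0.0]])   # both realizations output y(t) = x_1(t)

def H(F, G, s):
    """Transfer function C (sI - F)^{-1} G of a realization."""
    return (C @ np.linalg.inv(s * np.eye(2) - F) @ G)[0, 0]

# Both realizations give (b1 s + b0) / (s^2 + p1 s + p0):
s = 1.0 + 2.0j
assert abs(H(F2, G2, s) - H(F1, G1, s)) < 1e-9
```

This illustrates the point of the passage: distinct state variable choices (here related by a nonsingular linear transformation) yield different F and G but the same output process.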