Kalman Filtering and Neural Networks - Haykin S.

Haykin, S. (Ed.). Kalman Filtering and Neural Networks. Wiley, 2001. 202 p. ISBN: 0-471-36998-5.

$$E[(x_k - \hat{x}_k)\,y_i^T] = 0, \qquad i = 1, 2, \ldots, k,$$ (1.13)

it follows that

$$E[(x_k - \hat{x}_k)\,\hat{y}_k^T] = 0,$$ (1.14)
where $\hat{y}_k$ is an estimate of $y_k$ given the previous measurements $y_1, y_2, \ldots, y_{k-1}$. Define the innovations process

$$\tilde{y}_k = y_k - \hat{y}_k.$$ (1.15)
The innovations process represents a measure of the "new" information contained in $y_k$; it may also be expressed as

$$\begin{aligned} \tilde{y}_k &= y_k - H_k \hat{x}_k^- \\ &= H_k x_k + v_k - H_k \hat{x}_k^- \\ &= H_k \tilde{x}_k^- + v_k, \end{aligned}$$ (1.16)

where $\tilde{x}_k^- = x_k - \hat{x}_k^-$ denotes the a priori estimation error.
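As a concrete illustration (not from the book), the innovation of Eqs. (1.15)-(1.16) is simply the measurement minus the measurement predicted from the a priori state estimate; a minimal NumPy sketch with our own variable names:

```python
import numpy as np

def innovation(y_k, H_k, x_prior):
    # Eq. (1.15)/(1.16): the part of the measurement y_k that is not
    # predicted by the a priori state estimate x_prior (i.e., x_k^-).
    return y_k - H_k @ x_prior
```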
Hence, subtracting Eq. (1.14) from (1.13) and then using the definition of Eq. (1.15), we may write
$$E[(x_k - \hat{x}_k)\,\tilde{y}_k^T] = 0.$$ (1.17)
Using Eqs. (1.3) and (1.12), we may express the state-error vector $x_k - \hat{x}_k$ as
$$\begin{aligned} x_k - \hat{x}_k &= \tilde{x}_k^- - G_k(H_k \tilde{x}_k^- + v_k) \\ &= (I - G_k H_k)\tilde{x}_k^- - G_k v_k. \end{aligned}$$ (1.18)
Hence, substituting Eqs. (1.16) and (1.18) into (1.17), we get
$$E[\{(I - G_k H_k)\tilde{x}_k^- - G_k v_k\}(H_k \tilde{x}_k^- + v_k)^T] = 0.$$ (1.19)
Since the measurement noise $v_k$ is independent of the state $x_k$, and therefore of the error $\tilde{x}_k^-$, the expectation of Eq. (1.19) reduces to
$$(I - G_k H_k)E[\tilde{x}_k^- \tilde{x}_k^{-T}]H_k^T - G_k E[v_k v_k^T] = 0.$$ (1.20)
Define the a priori covariance matrix
$$P_k^- = E[(x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T] = E[\tilde{x}_k^- \tilde{x}_k^{-T}].$$ (1.21)
Then, invoking the covariance definitions of Eqs. (1.4) and (1.21), we may rewrite Eq. (1.20) as
$$(I - G_k H_k)P_k^- H_k^T - G_k R_k = 0.$$

Solving this equation for $G_k$, we get the desired formula

$$G_k = P_k^- H_k^T [H_k P_k^- H_k^T + R_k]^{-1},$$ (1.22)

where the symbol $[\cdot]^{-1}$ denotes the inverse of the matrix inside the square brackets. Equation (1.22) is the desired formula for computing the Kalman gain $G_k$, which is defined in terms of the a priori covariance matrix $P_k^-$.
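As an illustrative sketch (our variable names, not the book's), Eq. (1.22) can be evaluated with a linear solve instead of an explicit matrix inverse, which is the usual numerical practice:

```python
import numpy as np

def kalman_gain(P_prior, H, R):
    # Eq. (1.22): G_k = P_k^- H_k^T [H_k P_k^- H_k^T + R_k]^{-1}.
    S = H @ P_prior @ H.T + R              # innovations covariance
    # Solve S^T G^T = (P^- H^T)^T rather than forming S^{-1} explicitly.
    return np.linalg.solve(S.T, (P_prior @ H.T).T).T
```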
To complete the recursive estimation procedure, we consider the error covariance propagation, which describes the effects of time on the covariance matrices of estimation errors. This propagation involves two stages of computation:
1. The a priori covariance matrix $P_k^-$ at time $k$ is defined by Eq. (1.21). Given $P_k^-$, compute the a posteriori covariance matrix $P_k$, which, at time $k$, is defined by

$$P_k = E[\tilde{x}_k \tilde{x}_k^T] = E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T].$$ (1.23)
2. Given the "old" a posteriori covariance matrix $P_{k-1}$, compute the "updated" a priori covariance matrix $P_k^-$.
To proceed with stage 1, we substitute Eq. (1.18) into (1.23) and note that the noise process $v_k$ is independent of the a priori estimation error $\tilde{x}_k^-$. We thus obtain
$$\begin{aligned} P_k &= (I - G_k H_k)E[\tilde{x}_k^- \tilde{x}_k^{-T}](I - G_k H_k)^T + G_k E[v_k v_k^T]G_k^T \\ &= (I - G_k H_k)P_k^-(I - G_k H_k)^T + G_k R_k G_k^T. \end{aligned}$$ (1.24)
Equation (1.24) is referred to as the "Joseph" version of the covariance update equation [5].
Expanding terms in Eq. (1.24) and then using Eq. (1.22), we may reformulate the dependence of the a posteriori covariance matrix $P_k$ on the a priori covariance matrix $P_k^-$ in the simplified form
$$\begin{aligned} P_k &= (I - G_k H_k)P_k^- - (I - G_k H_k)P_k^- H_k^T G_k^T + G_k R_k G_k^T \\ &= (I - G_k H_k)P_k^- - G_k R_k G_k^T + G_k R_k G_k^T \\ &= (I - G_k H_k)P_k^-. \end{aligned}$$ (1.25)
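The two algebraically equivalent measurement updates, Eqs. (1.24) and (1.25), might be coded as below (a sketch with assumed names). The Joseph form costs more arithmetic but preserves symmetry and positive semidefiniteness under roundoff, while the simplified form is valid only when $G_k$ is the optimal gain of Eq. (1.22):

```python
import numpy as np

def covariance_update_joseph(P_prior, G, H, R):
    # "Joseph" form, Eq. (1.24): numerically robust, valid for any gain G.
    A = np.eye(P_prior.shape[0]) - G @ H
    return A @ P_prior @ A.T + G @ R @ G.T

def covariance_update_simple(P_prior, G, H):
    # Simplified form, Eq. (1.25): assumes G is the optimal Kalman gain.
    return (np.eye(P_prior.shape[0]) - G @ H) @ P_prior
```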
For the second stage of error covariance propagation, we first recognize that the a priori estimate of the state is defined in terms of the "old" a posteriori estimate as follows:
$$\hat{x}_k^- = F_{k,k-1}\hat{x}_{k-1}.$$ (1.26)
We may therefore use Eqs. (1.1) and (1.26) to express the a priori estimation error in yet another form:
$$\begin{aligned} \tilde{x}_k^- &= x_k - \hat{x}_k^- \\ &= (F_{k,k-1}x_{k-1} + w_{k-1}) - (F_{k,k-1}\hat{x}_{k-1}) \\ &= F_{k,k-1}(x_{k-1} - \hat{x}_{k-1}) + w_{k-1} \\ &= F_{k,k-1}\tilde{x}_{k-1} + w_{k-1}. \end{aligned}$$ (1.27)
Accordingly, using Eq. (1.27) in (1.21) and noting that the process noise $w_{k-1}$ is independent of $\tilde{x}_{k-1}$, we get
$$\begin{aligned} P_k^- &= F_{k,k-1}E[\tilde{x}_{k-1}\tilde{x}_{k-1}^T]F_{k,k-1}^T + E[w_{k-1}w_{k-1}^T] \\ &= F_{k,k-1}P_{k-1}F_{k,k-1}^T + Q_{k-1}, \end{aligned}$$ (1.28)
which defines the dependence of the a priori covariance matrix $P_k^-$ on the "old" a posteriori covariance matrix $P_{k-1}$.
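Equations (1.26) and (1.28) together form the time-update (prediction) half of the recursion; a minimal sketch under the same assumed naming as above:

```python
import numpy as np

def predict(x_post, P_post, F, Q):
    # Time update: propagate the a posteriori quantities at step k-1
    # to the a priori quantities at step k.
    x_prior = F @ x_post                   # Eq. (1.26)
    P_prior = F @ P_post @ F.T + Q         # Eq. (1.28)
    return x_prior, P_prior
```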
With Eqs. (1.26), (1.28), (1.22), (1.12), and (1.25) at hand, we may now summarize the recursive estimation of state as shown in Table 1.1. This table also includes the initialization. In the absence of any observed data at time $k = 0$, we may choose the initial estimate of the state as
$$\hat{x}_0 = E[x_0],$$ (1.29)
Table 1.1 Summary of the Kalman filter
State-space model:
$$x_{k+1} = F_{k+1,k}x_k + w_k,$$
$$y_k = H_k x_k + v_k,$$
where $w_k$ and $v_k$ are independent, zero-mean, Gaussian noise processes with covariance matrices $Q_k$ and $R_k$, respectively.

Initialization: For $k = 0$, set
$$\hat{x}_0 = E[x_0],$$
$$P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T].$$

Computation: For $k = 1, 2, \ldots$, compute:

State estimate propagation:
$$\hat{x}_k^- = F_{k,k-1}\hat{x}_{k-1};$$

Error covariance propagation:
$$P_k^- = F_{k,k-1}P_{k-1}F_{k,k-1}^T + Q_{k-1};$$

Kalman gain matrix:
$$G_k = P_k^- H_k^T [H_k P_k^- H_k^T + R_k]^{-1};$$

State estimate update:
$$\hat{x}_k = \hat{x}_k^- + G_k(y_k - H_k\hat{x}_k^-);$$

Error covariance update:
$$P_k = (I - G_k H_k)P_k^-.$$
and the initial value of the a posteriori covariance matrix as
$$P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T].$$ (1.30)
This choice for the initial conditions not only is intuitively satisfying but also has the advantage of yielding an unbiased estimate of the state $x_k$.
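To make the data flow of Table 1.1 concrete, one full cycle of the recursion might be sketched as follows (Python/NumPy, with our own names; the scalar random-walk example at the bottom is purely illustrative):

```python
import numpy as np

def kalman_step(x_post, P_post, y, F, H, Q, R):
    # One cycle of Table 1.1, taking step k-1 quantities to step k.
    x_prior = F @ x_post                              # Eq. (1.26)
    P_prior = F @ P_post @ F.T + Q                    # Eq. (1.28)
    S = H @ P_prior @ H.T + R                         # innovations covariance
    G = np.linalg.solve(S.T, (P_prior @ H.T).T).T     # Eq. (1.22)
    x_post = x_prior + G @ (y - H @ x_prior)          # Eq. (1.12)
    P_post = (np.eye(len(x_post)) - G @ H) @ P_prior  # Eq. (1.25)
    return x_post, P_post

# Example: track a scalar random walk observed in noise.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])             # Eqs. (1.29), (1.30)
for y in ([0.9], [1.1], [0.8], [1.2]):
    x, P = kalman_step(x, P, np.array(y), F, H, Q, R)
```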
1.4 DIVERGENCE PHENOMENON: SQUARE-ROOT FILTERING
The Kalman filter is prone to serious numerical difficulties that are well documented in the literature [6]. For example, the a posteriori covariance matrix $P_k$ is defined as the difference between two matrices, $P_k^-$ and $G_k H_k P_k^-$ [see Eq. (1.25)].