# Kalman Filtering Neural Networks - Haykin S.

ISBN: 0-471-36998-5


## 7.3 THE UNSCENTED KALMAN FILTER

Figure 7.3 Example of the UT for mean and covariance propagation: (a) actual; (b) first-order linearization (EKF); (c) UT.

The complete UKF algorithm is given in Table 7.2. Note that no explicit calculations of Jacobians or Hessians are necessary to implement this algorithm. Furthermore, the overall number of computations is of the same order as for the EKF.
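The Jacobian-free propagation sketched in Figure 7.3(c) is straightforward to code. Below is a minimal NumPy sketch of the unscented transform (not the book's implementation; the weight formulas follow the common scaled-UT convention with tuning parameters α, β, κ, which we assume corresponds to Eq. (7.34)):

```python
import numpy as np

def unscented_transform(f, x_mean, P_x, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a mean and covariance through a nonlinearity f using
    2L+1 deterministically chosen sigma points -- no Jacobians needed."""
    L = x_mean.size
    lam = alpha**2 * (L + kappa) - L          # composite scaling parameter
    gamma = np.sqrt(L + lam)
    S = np.linalg.cholesky(P_x)               # matrix square root of P_x
    X = np.column_stack(
        [x_mean]
        + [x_mean + gamma * S[:, i] for i in range(L)]
        + [x_mean - gamma * S[:, i] for i in range(L)])
    Wm = np.full(2 * L + 1, 0.5 / (L + lam))  # weights on the mean
    Wc = Wm.copy()                            # weights on the covariance
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
    Y = np.column_stack([f(X[:, j]) for j in range(2 * L + 1)])
    y_mean = Y @ Wm
    D = Y - y_mean[:, None]
    P_y = D @ np.diag(Wc) @ D.T
    return y_mean, P_y
```

For a linear map the UT is exact, which makes a convenient sanity check; for nonlinear maps it captures the posterior mean and covariance accurately to second order.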

**Implementation Variations** For the special (but often encountered) case where the process and measurement noise are purely additive, the computational complexity of the UKF can be reduced. In such a case, the system state need not be augmented with the noise RVs. This reduces the dimension of the sigma points, as well as the total number of sigma points used. The covariances of the noise sources are then incorporated into the state covariance using a simple additive procedure. This implementation is given in Table 7.3. The complexity of the algorithm is of order $L^3$, where $L$ is the dimension of the state. This is the same complexity as the EKF. The most costly operation is in forming the sample prior covariance matrix $P_k^-$. Depending on the form of $F$, this may be simplified; for example, for univariate time series or with parameter estimation (see Section 7.4), the complexity reduces to order $L^2$.


Table 7.2 Unscented Kalman filter (UKF) equations

Initialize with

$$\hat{x}_0 = E[x_0], \qquad (7.35)$$

$$P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T], \qquad (7.36)$$

$$\hat{x}_0^a = E[x^a] = [\hat{x}_0^T \;\; 0 \;\; 0]^T, \qquad (7.37)$$

$$P_0^a = E[(x_0^a - \hat{x}_0^a)(x_0^a - \hat{x}_0^a)^T] = \begin{bmatrix} P_0 & 0 & 0 \\ 0 & R^v & 0 \\ 0 & 0 & R^n \end{bmatrix}. \qquad (7.38)$$

For $k \in \{1, \ldots, \infty\}$, calculate the sigma points:

$$\mathcal{X}_{k-1}^a = \begin{bmatrix} \hat{x}_{k-1}^a & \hat{x}_{k-1}^a + \gamma\sqrt{P_{k-1}^a} & \hat{x}_{k-1}^a - \gamma\sqrt{P_{k-1}^a} \end{bmatrix}. \qquad (7.39)$$

The time-update equations are

$$\mathcal{X}_{k|k-1}^x = F(\mathcal{X}_{k-1}^x, u_{k-1}, \mathcal{X}_{k-1}^v), \qquad (7.40)$$

$$\hat{x}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \mathcal{X}_{i,k|k-1}^x, \qquad (7.41)$$

$$P_k^- = \sum_{i=0}^{2L} W_i^{(c)} (\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-)(\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-)^T, \qquad (7.42)$$

$$\mathcal{Y}_{k|k-1} = H(\mathcal{X}_{k|k-1}^x, \mathcal{X}_{k-1}^n), \qquad (7.43)$$

$$\hat{y}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \mathcal{Y}_{i,k|k-1}, \qquad (7.44)$$

and the measurement-update equations are

$$P_{\tilde{y}_k \tilde{y}_k} = \sum_{i=0}^{2L} W_i^{(c)} (\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-)(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-)^T, \qquad (7.45)$$

$$P_{x_k y_k} = \sum_{i=0}^{2L} W_i^{(c)} (\mathcal{X}_{i,k|k-1} - \hat{x}_k^-)(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-)^T, \qquad (7.46)$$

$$K_k = P_{x_k y_k} P_{\tilde{y}_k \tilde{y}_k}^{-1}, \qquad (7.47)$$

$$\hat{x}_k = \hat{x}_k^- + K_k (y_k - \hat{y}_k^-), \qquad (7.48)$$

$$P_k = P_k^- - K_k P_{\tilde{y}_k \tilde{y}_k} K_k^T, \qquad (7.49)$$

where $x^a = [x^T \; v^T \; n^T]^T$, $\mathcal{X}^a = [(\mathcal{X}^x)^T \; (\mathcal{X}^v)^T \; (\mathcal{X}^n)^T]^T$, and $\gamma = \sqrt{L + \lambda}$; $\lambda$ is the composite scaling parameter, $L$ is the dimension of the augmented state, $R^v$ is the process-noise covariance, $R^n$ is the measurement-noise covariance, and $W_i$ are the weights as calculated in Eq. (7.34).
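For concreteness, the augmented quantities of Eqs. (7.37)-(7.38) and the sigma-point set of Eq. (7.39) can be formed as below (a hypothetical sketch; the function name and interface are ours, not the book's):

```python
import numpy as np

def augmented_sigma_points(x_hat, P, Rv, Rn, lam):
    """Build x^a and the block-diagonal P^a of Eqs. (7.37)-(7.38),
    then the 2L+1 sigma points of Eq. (7.39)."""
    Lx, Lv, Ln = x_hat.size, Rv.shape[0], Rn.shape[0]
    L = Lx + Lv + Ln                       # dimension of the augmented state
    x_a = np.concatenate([x_hat, np.zeros(Lv + Ln)])  # noise means are zero
    P_a = np.zeros((L, L))                 # block-diagonal augmented covariance
    P_a[:Lx, :Lx] = P
    P_a[Lx:Lx + Lv, Lx:Lx + Lv] = Rv
    P_a[Lx + Lv:, Lx + Lv:] = Rn
    gamma = np.sqrt(L + lam)               # gamma = sqrt(L + lambda)
    S = np.linalg.cholesky(P_a)            # matrix square root of P^a
    X_a = np.column_stack([x_a,
                           *(x_a + gamma * S[:, i] for i in range(L)),
                           *(x_a - gamma * S[:, i] for i in range(L))])
    return X_a, L
```

The three row blocks of `X_a` then serve as the state, process-noise, and measurement-noise parts $\mathcal{X}^x$, $\mathcal{X}^v$, $\mathcal{X}^n$ consumed by Eqs. (7.40) and (7.43).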

Table 7.3 UKF - additive (zero-mean) noise case

Initialize with

$$\hat{x}_0 = E[x_0], \qquad (7.50)$$

$$P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]. \qquad (7.51)$$

For $k \in \{1, \ldots, \infty\}$, calculate the sigma points:

$$\mathcal{X}_{k-1} = \begin{bmatrix} \hat{x}_{k-1} & \hat{x}_{k-1} + \gamma\sqrt{P_{k-1}} & \hat{x}_{k-1} - \gamma\sqrt{P_{k-1}} \end{bmatrix}. \qquad (7.52)$$

The time-update equations are

$$\mathcal{X}_{k|k-1}^* = F(\mathcal{X}_{k-1}, u_{k-1}), \qquad (7.53)$$

$$\hat{x}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \mathcal{X}_{i,k|k-1}^*, \qquad (7.54)$$

$$P_k^- = \sum_{i=0}^{2L} W_i^{(c)} (\mathcal{X}_{i,k|k-1}^* - \hat{x}_k^-)(\mathcal{X}_{i,k|k-1}^* - \hat{x}_k^-)^T + R^v, \qquad (7.55)$$

(augment sigma points)⁶

$$\mathcal{X}_{k|k-1} = \begin{bmatrix} \mathcal{X}_{k|k-1}^* & \mathcal{X}_{0,k|k-1}^* + \gamma\sqrt{R^v} & \mathcal{X}_{0,k|k-1}^* - \gamma\sqrt{R^v} \end{bmatrix}, \qquad (7.56)$$

$$\mathcal{Y}_{k|k-1} = H(\mathcal{X}_{k|k-1}), \qquad (7.57)$$

$$\hat{y}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \mathcal{Y}_{i,k|k-1}, \qquad (7.58)$$

and the measurement-update equations are

$$P_{\tilde{y}_k \tilde{y}_k} = \sum_{i=0}^{2L} W_i^{(c)} (\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-)(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-)^T + R^n, \qquad (7.59)$$

$$P_{x_k y_k} = \sum_{i=0}^{2L} W_i^{(c)} (\mathcal{X}_{i,k|k-1} - \hat{x}_k^-)(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-)^T, \qquad (7.60)$$

$$K_k = P_{x_k y_k} P_{\tilde{y}_k \tilde{y}_k}^{-1}, \qquad (7.61)$$

$$\hat{x}_k = \hat{x}_k^- + K_k (y_k - \hat{y}_k^-), \qquad (7.62)$$

$$P_k = P_k^- - K_k P_{\tilde{y}_k \tilde{y}_k} K_k^T, \qquad (7.63)$$

where $\gamma = \sqrt{L + \lambda}$; $\lambda$ is the composite scaling parameter, $L$ is the dimension of the state, $R^v$ is the process-noise covariance, $R^n$ is the measurement-noise covariance, and $W_i$ are the weights as calculated in Eq. (7.34).

⁶ Here we augment the sigma points with additional points derived from the matrix square root of the process-noise covariance. This requires setting $L \to 2L$ and recalculating the various weights $W$ accordingly. Alternatively, we may redraw a complete new set of sigma points, i.e., $\mathcal{X}_{k|k-1} = [\hat{x}_k^- \;\; \hat{x}_k^- + \gamma\sqrt{P_k^-} \;\; \hat{x}_k^- - \gamma\sqrt{P_k^-}]$. This alternative approach results in fewer sigma points being used, but also discards any odd-moments information captured by the original propagated sigma points.
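One complete cycle of the additive-noise UKF of Table 7.3 might look as follows in NumPy (a minimal sketch, not the book's code; it uses the footnote's redrawn-sigma-point alternative rather than the augmentation of Eq. (7.56), and assumes the common scaled-UT weight convention for Eq. (7.34)):

```python
import numpy as np

def ukf_step_additive(x_hat, P, y_k, F, H, Rv, Rn, u=None,
                      alpha=1e-3, beta=2.0, kappa=0.0):
    """One time/measurement update of Table 7.3 (additive zero-mean noise)."""
    L = x_hat.size
    lam = alpha**2 * (L + kappa) - L           # composite scaling parameter
    gamma = np.sqrt(L + lam)
    Wm = np.full(2 * L + 1, 0.5 / (L + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)

    def sigma_points(m, C):                    # Eq. (7.52)
        S = np.linalg.cholesky(C)
        return np.column_stack([m, *(m + gamma * S[:, i] for i in range(L)),
                                   *(m - gamma * S[:, i] for i in range(L))])

    # --- time update, Eqs. (7.53)-(7.55) ---
    X = sigma_points(x_hat, P)
    Xs = np.column_stack([F(X[:, j], u) for j in range(2 * L + 1)])
    x_prior = Xs @ Wm
    Dx = Xs - x_prior[:, None]
    P_prior = Dx @ np.diag(Wc) @ Dx.T + Rv
    # redraw sigma points around the prior (footnote alternative to Eq. (7.56))
    X2 = sigma_points(x_prior, P_prior)
    Y = np.column_stack([H(X2[:, j]) for j in range(2 * L + 1)])
    y_prior = Y @ Wm                           # Eqs. (7.57)-(7.58)
    # --- measurement update, Eqs. (7.59)-(7.63) ---
    Dy = Y - y_prior[:, None]
    Dx2 = X2 - x_prior[:, None]
    P_yy = Dy @ np.diag(Wc) @ Dy.T + Rn
    P_xy = Dx2 @ np.diag(Wc) @ Dy.T
    K = P_xy @ np.linalg.inv(P_yy)             # Kalman gain, Eq. (7.61)
    x_new = x_prior + K @ (y_k - y_prior)
    P_new = P_prior - K @ P_yy @ K.T
    return x_new, P_new
```

With linear $F$ and $H$ this reduces exactly to the standard Kalman filter update, which makes a convenient sanity check.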


A number of variations for numerical purposes are also possible. For example, the matrix square root, which can be implemented directly using a Cholesky factorization, is in general of order $L^3/6$. However, the covariance matrices are expressed recursively, and thus the square root can be computed in only order $M \times L^2$ (where $M$ is the dimension of the output $y_k$) by performing a recursive update to the Cholesky factorization. Details of an efficient recursive square-root UKF implementation are given in Appendix B.
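The recursive square-root idea rests on rank-one updates of a Cholesky factor, each costing $O(L^2)$ instead of the $O(L^3)$ of a fresh factorization. Below is a sketch of the classical update/downdate routine (a standard algorithm, not the Appendix B implementation):

```python
import numpy as np

def cholupdate(S, v, sign=1.0):
    """Rank-one update (sign=+1) or downdate (sign=-1) of an
    upper-triangular Cholesky factor S with S.T @ S = P, returning
    the factor of P + sign * v v^T in O(L^2) operations."""
    S = S.copy(); v = v.astype(float).copy()
    n = v.size
    for i in range(n):
        r = np.sqrt(S[i, i]**2 + sign * v[i]**2)
        c, s = r / S[i, i], v[i] / S[i, i]
        S[i, i] = r
        if i + 1 < n:
            S[i, i+1:] = (S[i, i+1:] + sign * s * v[i+1:]) / c
            v[i+1:] = c * v[i+1:] - s * S[i, i+1:]
    return S
```

Each weighted sigma-point deviation contributes one such rank-one term to the covariance, so a full covariance refresh amounts to a short sequence of these updates on the existing factor.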

### 7.3.1 State-Estimation Examples

The UKF was originally designed for state estimation applied to nonlinear control applications requiring full-state feedback [1-3]. We provide an example for a double inverted pendulum control system. In addition, we provide a new application example corresponding to noisy time-series estimation with neural networks.
