# Kalman Filtering Neural Networks - Haykin S.

ISBN: 0-471-36998-5


Gintaras V. Puskorius, Ford Research Laboratory, Ford Motor Company, 2101 Village Road, Dearborn, MI 48121-2053, U.S.A.

Ron Racine, Department of Psychology, McMaster University, 1280 Main Street West, Hamilton, ON, Canada L8S 4K1

Sam T. Roweis, Gatsby Computational Neuroscience Unit, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, U.K.

Rudolph van der Merwe, Department of Electrical and Computer Engineering, Oregon Graduate Institute of Science and Technology, 19600 N.W. von Neumann Drive, Beaverton, OR 97006-1999, U.S.A.

Eric A. Wan, Department of Electrical and Computer Engineering, Oregon Graduate Institute of Science and Technology, 19600 N.W. von Neumann Drive, Beaverton, OR 97006-1999, U.S.A.


KALMAN FILTERING AND NEURAL NETWORKS

Adaptive and Learning Systems for Signal Processing, Communications, and Control Editor: Simon Haykin

Beckerman / ADAPTIVE COOPERATIVE SYSTEMS

Chen and Gu / CONTROL-ORIENTED SYSTEM IDENTIFICATION: An Approach

Cherkassky and Mulier / LEARNING FROM DATA: Concepts, Theory, and Methods

Diamantaras and Kung / PRINCIPAL COMPONENT NEURAL NETWORKS: Theory and Applications

Haykin / KALMAN FILTERING AND NEURAL NETWORKS

Haykin / UNSUPERVISED ADAPTIVE FILTERING: Blind Source Separation

Haykin / UNSUPERVISED ADAPTIVE FILTERING: Blind Deconvolution

Haykin and Puthusserypady / CHAOTIC DYNAMICS OF SEA CLUTTER

Hrycej / NEUROCONTROL: Towards an Industrial Control Methodology

Hyvärinen, Karhunen, and Oja / INDEPENDENT COMPONENT ANALYSIS

Krstić, Kanellakopoulos, and Kokotović / NONLINEAR AND ADAPTIVE CONTROL DESIGN

Nikias and Shao / SIGNAL PROCESSING WITH ALPHA-STABLE DISTRIBUTIONS AND APPLICATIONS

Passino and Burgess / STABILITY ANALYSIS OF DISCRETE EVENT SYSTEMS

Sánchez-Peña and Sznaier / ROBUST SYSTEMS THEORY AND APPLICATIONS

Sandberg, Lo, Fancourt, Principe, Katagiri, and Haykin / NONLINEAR DYNAMICAL SYSTEMS: Feedforward Neural Network Perspectives

Tao and Kokotovic / ADAPTIVE CONTROL OF SYSTEMS WITH ACTUATOR AND SENSOR NONLINEARITIES

Tsoukalas and Uhrig / FUZZY AND NEURAL APPROACHES IN ENGINEERING

Van Hulle / FAITHFUL REPRESENTATIONS AND TOPOGRAPHIC MAPS: From Distortion- to Information-Based Self-Organization

Vapnik / STATISTICAL LEARNING THEORY

Werbos / THE ROOTS OF BACKPROPAGATION: From Ordered Derivatives to Neural Networks and Political Forecasting

Kalman Filtering and Neural Networks, Edited by Simon Haykin. Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-36998-5 (Hardback); 0-471-22154-6 (Electronic)

1

KALMAN FILTERS

Simon Haykin

Communications Research Laboratory, McMaster University,

Hamilton, Ontario, Canada (haykin@mcmaster.ca)

1.1 INTRODUCTION

The celebrated Kalman filter, rooted in the state-space formulation of linear dynamical systems, provides a recursive solution to the linear optimal filtering problem. It applies to stationary as well as nonstationary environments. The solution is recursive in that each updated estimate of the state is computed from the previous estimate and the new input data, so only the previous estimate requires storage. In addition to eliminating the need for storing the entire past observed data, the Kalman filter is computationally more efficient than computing the estimate directly from the entire past observed data at each step of the filtering process.
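The storage and computation saving afforded by recursion can be illustrated with a far simpler estimator than the Kalman filter itself: a running sample mean. The sketch below is only an analogy for the recursive structure described above (the Kalman filter applies the same idea to state estimates, with gain matrices in place of the scalar weight 1/k).

```python
# Recursive vs. batch estimation of a sample mean: an illustration of the
# storage saving described above (not the Kalman filter itself).

def recursive_mean(prev_estimate, new_sample, k):
    """Update the estimate from the previous estimate and the new datum only.

    After k samples: est_k = est_{k-1} + (y_k - est_{k-1}) / k
    """
    return prev_estimate + (new_sample - prev_estimate) / k

data = [2.0, 4.0, 6.0, 8.0]

est = 0.0
for k, y in enumerate(data, start=1):
    est = recursive_mean(est, y, k)    # O(1) storage: only `est` is kept

batch = sum(data) / len(data)          # direct estimate: all past data retained
print(est, batch)                      # both give 5.0
```

Each update uses only the previous estimate and the newest observation, yet reproduces the batch answer exactly; this is the sense in which the Kalman filter eliminates the need to store the entire past record.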

In this chapter, we present an introductory treatment of Kalman filters to pave the way for their application in subsequent chapters of the book. We have chosen to follow the original paper by Kalman [1] for the derivation; see also the books by Lewis [2] and Grewal and Andrews [3]. The derivation is not only elegant but also highly insightful.

Consider a linear, discrete-time dynamical system described by the block diagram shown in Figure 1.1. The concept of state is fundamental to this description. The state vector, or simply state, denoted by $\mathbf{x}_k$, is defined as the minimal set of data that is sufficient to uniquely describe the unforced dynamical behavior of the system; the subscript $k$ denotes discrete time. In other words, the state is the least amount of data on the past behavior of the system that is needed to predict its future behavior. Typically, the state $\mathbf{x}_k$ is unknown. To estimate it, we use a set of observed data, denoted by the vector $\mathbf{y}_k$.

In mathematical terms, the block diagram of Figure 1.1 embodies the following pair of equations:

1. Process equation

$$\mathbf{x}_{k+1} = \mathbf{F}_{k+1,k}\,\mathbf{x}_k + \mathbf{w}_k, \qquad (1.1)$$

where $\mathbf{F}_{k+1,k}$ is the transition matrix taking the state $\mathbf{x}_k$ from time $k$ to time $k+1$. The process noise $\mathbf{w}_k$ is assumed to be additive, white, and Gaussian, with zero mean and with covariance matrix defined by

$$E[\mathbf{w}_n \mathbf{w}_k^{\mathrm{T}}] = \begin{cases} \mathbf{Q}_k & \text{for } n = k, \\ \mathbf{0} & \text{for } n \neq k, \end{cases} \qquad (1.2)$$

where the superscript T denotes matrix transposition. The dimension of the state space is denoted by $M$.
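The process equation (1.1) can be simulated directly. In the sketch below, the particular transition matrix $\mathbf{F}$ (a constant-velocity model) and process-noise covariance $\mathbf{Q}_k$ are illustrative assumptions, not values from the text.

```python
import numpy as np

# Simulate the process equation x_{k+1} = F_{k+1,k} x_k + w_k  (Eq. 1.1).
# F and Q below are illustrative choices, not values from the text.

rng = np.random.default_rng(seed=0)

M = 2                                    # state-space dimension
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # transition matrix F_{k+1,k}, constant here
Q = 0.01 * np.eye(M)                     # process-noise covariance Q_k

x = np.zeros(M)                          # initial state x_0
states = [x]
for k in range(50):
    w = rng.multivariate_normal(np.zeros(M), Q)  # w_k ~ N(0, Q_k), white across k
    x = F @ x + w                        # process equation (1.1)
    states.append(x)

print(len(states), states[-1].shape)     # 51 states, each of dimension M
```

Because $\mathbf{w}_k$ is drawn independently at each step, the covariance condition (1.2) holds by construction: noise samples at different times are uncorrelated.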


Figure 1.1 Signal-flow graph representation of a linear, discrete-time dynamical system.


2. Measurement equation
