Haykin, S. (ed.). Kalman Filtering and Neural Networks. Wiley, 2001. 202 pp. ISBN 0-471-36998-5.

3.6 EXPERIMENT 3
In Experiment 1, the network was presented with short sequences (four images) of only two shapes (circle and triangle), and in Experiment 2 an extra shape (square) was added. In Experiment 3, to make the learning task even more challenging, the length of the sequences was increased to 10 and the restriction of one direction of motion per shape was lifted. Specifically, each shape was permitted to move right and either up or down. Thus, the network was exposed to different shapes traveling in similar directions and also to the same shape traveling in different directions, increasing the total number of images presented to the network from 8 in Experiment 1 and 12 in Experiment 2 to 100 in this experiment. In effect, there is a substantial increase in the number of learning patterns, and thus in the complexity of the learning task. However, since the number of weights in the network is limited and remains the same as in the other experiments, the network cannot simply memorize the sequences.
We trained a network of the same 100-16-8R-100 architecture on six sequences, each consisting of 10 images (see Fig. 3.4) in the following order:
• circle moving right and up;
• square moving right and down;
• triangle moving right and up;
• circle moving right and down;
• square moving right and up;
• triangle moving right and down.
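The construction of these six training sequences can be sketched in NumPy. This is a minimal illustration, not the authors' code: it assumes a 10 × 10 pixel grid (matching the network's 100 inputs), hypothetical 3 × 3 binary shape masks, wraparound motion at the grid edges, and "up" meaning a decreasing row index — none of these details are specified in the text.

```python
import numpy as np

GRID = 10  # 100 network inputs suggest a 10x10 pixel grid (an assumption)

# Hypothetical 3x3 binary masks standing in for the three shapes
SHAPES = {
    "circle":   np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]),
    "square":   np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]),
    "triangle": np.array([[0, 1, 0], [1, 1, 1], [0, 0, 0]]),
}

def make_sequence(shape, dy, length=10):
    """Generate `length` frames of `shape` moving one pixel right and
    `dy` pixels vertically per frame, wrapping at the grid edges."""
    frames = []
    for t in range(length):
        img = np.zeros((GRID, GRID))
        r, c = (4 + dy * t) % GRID, t % GRID   # top-left corner of the mask
        for i in range(3):
            for j in range(3):
                img[(r + i) % GRID, (c + j) % GRID] = SHAPES[shape][i, j]
        frames.append(img.ravel())             # flatten to the 100-d input vector
    return np.stack(frames)                    # array of shape (length, 100)

# The six training sequences, in the order listed above (dy = -1 is "up")
training_set = [make_sequence(s, dy) for s, dy in [
    ("circle", -1), ("square", +1), ("triangle", -1),
    ("circle", +1), ("square", -1), ("triangle", +1)]]
```

Each sequence is then a (10, 100) array whose rows are successive network input vectors.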
Training was performed in the same manner as in Experiment 2. During testing, the order of presentation of the six sequences was varied; several examples are shown in Figure 3.5. As in the previous experiments, even with the larger number of training patterns, the network is able to predict the correct motion of the shapes, failing only during transitions between shapes. It is able to distinguish between the same shape moving in different directions as well as different shapes moving in the same direction, using the context available via the recurrent connections.
Figure 3.4 Experiment 3: six image sequences used for training.
The failure of the model to make accurate predictions at transitions between shapes can also be seen in the residual error obtained during prediction. This residual error is quantified by the mean-squared prediction error, plotted in Figure 3.6 as a function of prediction step. Note the transient increase in error at transitions between shapes.
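The mean-squared prediction error at each step is simply the average squared difference between the target frame and the predicted frame, taken over all pixels; a minimal sketch:

```python
import numpy as np

def mse_per_step(targets, predictions):
    """Mean-squared prediction error at each step of a sequence.

    targets, predictions: arrays of shape (steps, pixels), e.g. (10, 100)
    for the sequences in this experiment.  Returns an array of length
    `steps` giving the per-frame error averaged over pixels.
    """
    return ((targets - predictions) ** 2).mean(axis=1)
```

Plotting this quantity over the course of a test run reproduces the curves of Figure 3.6, with spikes wherever one shape's sequence ends and another's begins.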
3.7 DISCUSSION
In this chapter, we have dealt with time-series prediction of high-dimensional signals: moving visual images. This situation is much more
Figure 3.5 Experiment 3: one-step prediction of image sequences using the trained network. The three rows in each image correspond to input, prediction, and error, respectively.
complicated than the one-dimensional case, in that the system has to deal with simultaneous shape and motion prediction. The network was trained by the EKF method to perform one-step prediction of image sequences presented in a specific order. Then, during testing, the order of the sequences was varied and the network was asked to predict the correct shape and location of the next image in the sequence. The complexity of the problem was increased from Experiment 1 to Experiment 3 as we introduced occlusions, increased both the length of the training sequences and the number of shapes presented, and allowed shape and motion to vary independently. In all cases, the network was able to predict the correct motion of the shapes, failing only momentarily at transitions between shapes.

Figure 3.6 Mean-squared prediction error in one-step prediction of image sequences using the trained network. The graphs show how the mean-squared prediction error varies as the prediction progresses. Notice the increase in error at transitions between shapes.
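The EKF weight update underlying the training can be sketched generically. This is a minimal global-EKF step for a network's parameter vector, not the chapter's implementation: it assumes the Jacobian H of the network outputs with respect to the weights is available (e.g., computed by backpropagation), and the noise covariances R and Q are illustrative placeholders.

```python
import numpy as np

def ekf_step(w, P, H, y, d, R, Q):
    """One global-EKF update of a network's weight vector.

    w: weight vector, shape (n,)
    P: weight-error covariance, shape (n, n)
    H: Jacobian of network outputs w.r.t. weights, shape (m, n)
    y: current network output, shape (m,)
    d: desired (target) output, shape (m,)
    R: measurement-noise covariance, shape (m, m)
    Q: process-noise covariance, shape (n, n)
    """
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    w = w + K @ (d - y)               # weight update toward the target
    P = P - K @ H @ P + Q             # covariance update
    return w, P
```

Applied once per training pattern, this treats the weights as the state of a nonlinear filtering problem, with the prediction error d - y acting as the innovation that drives the update.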
The network described here is a first step toward modeling the mechanisms by which the human brain might simultaneously recognize and track moving stimuli. Any attempt to model both shape and motion processing simultaneously within a single network may seem to be at odds with the well-established finding that shape and spatial information are processed in separate pathways of the visual system [5]. An extreme version of this view posits that form-related features are processed strictly by the ventral "what" pathway and motion features are processed strictly