Boyce W.E. Elementary Differential Equations and Boundary Value Problems - John Wiley & Sons, 2001. - 1310 p.

$$\sum_{i=1}^{n} a_i \phi_i(x_j) = f(x_j), \qquad j = 1, \ldots, n. \tag{3}$$
This procedure is known as the method of collocation. It has the advantage that it is very easy to write down Eqs. (3); one needs only to evaluate the functions involved at the points x_1, ..., x_n. If these points are well chosen, and if n is fairly large, then presumably S_n(x) will not only be equal to f(x) at the chosen points but will be reasonably close to it at other points as well. However, collocation has several deficiencies. One is that if one more base function φ_{n+1} is added, then one more point x_{n+1} is required, and all the coefficients must be recomputed. Thus, it is inconvenient to improve the accuracy of a collocation approximation by including additional terms. Further, the coefficients a_i depend on the location of the points x_1, ..., x_n, and it is not obvious how best to select these points.
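As a small illustration of how the collocation equations can be handled in practice (this example is not from the text), the sketch below assumes a hypothetical basis φ_i(x) = sin(iπx) on 0 ≤ x ≤ 1, an illustrative target function f(x) = x(1 − x), and equally spaced interior points x_j; it then solves the linear system (3) with NumPy.

```python
import numpy as np

# Hypothetical setup: basis functions phi_i(x) = sin(i*pi*x) on [0, 1]
# and a sample target function f; both are illustrative choices only.
def phi(i, x):
    return np.sin(i * np.pi * x)

def f(x):
    return x * (1.0 - x)                  # example target function

n = 5                                     # number of basis functions
xj = np.linspace(0, 1, n + 2)[1:-1]       # n interior collocation points

# Collocation conditions (Eq. 3): sum_i a_i * phi_i(x_j) = f(x_j), j = 1..n
A = np.array([[phi(i, x) for i in range(1, n + 1)] for x in xj])
a = np.linalg.solve(A, f(xj))

# S_n agrees with f at the collocation points (up to round-off) ...
Sn = lambda x: sum(a[i - 1] * phi(i, x) for i in range(1, n + 1))
print(np.max(np.abs(Sn(xj) - f(xj))))
# ... but adding one more basis function requires a new point x_{n+1}
# and a full re-solve, which is the inconvenience noted above.
```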
2. Alternatively, we can consider the difference |f(x) − S_n(x)| and try to make it as small as possible. The trouble here is that |f(x) − S_n(x)| is a function of x as well as of the coefficients a_1, ..., a_n, and it is not clear how to calculate the a_i. The choice of a_i that makes |f(x) − S_n(x)| small at one point may make it large at another. One way to proceed is to consider instead the least upper bound^{10} of |f(x) − S_n(x)| for x in 0 ≤ x ≤ 1, and then to choose a_1, ..., a_n so as to make this quantity as small as possible. That is, if
$$E_n(a_1, \ldots, a_n) = \operatorname*{lub}_{0 \le x \le 1} |f(x) - S_n(x)|, \tag{4}$$
then choose a_1, ..., a_n so as to minimize E_n. This approach is intuitively appealing, and is often used in theoretical calculations. However, in practice, it is usually very hard, if not impossible, to write down an explicit formula for E_n(a_1, ..., a_n). Further, this procedure also shares one of the disadvantages of collocation; namely, on adding an additional term to S_n(x), one must recompute all the preceding coefficients. Thus, it is not often useful in practical problems.
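For completeness, here is a minimal numerical sketch of how the minimax criterion (4) might be attacked in practice when no explicit formula for E_n is available. The basis, target function, grid, and the use of SciPy's general-purpose minimizer are all illustrative assumptions, not part of the text.

```python
import numpy as np
from scipy.optimize import minimize

phi = lambda i, x: np.sin(i * np.pi * x)     # hypothetical basis, as before
f = lambda x: x * (1.0 - x)                  # example target function
n, xs = 5, np.linspace(0, 1, 401)            # grid approximating 0 <= x <= 1

# E_n(a_1, ..., a_n): least upper bound of |f(x) - S_n(x)|, approximated
# here by the maximum over the grid (Eq. 4).
def En(a):
    Sn = sum(a[i - 1] * phi(i, xs) for i in range(1, n + 1))
    return np.max(np.abs(f(xs) - Sn))

# No explicit formula for the minimizing a_i, so search numerically.
res = minimize(En, x0=np.zeros(n), method="Nelder-Mead")
print(res.fun, res.x)   # approximate minimax error and coefficients
```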
3. Another way to proceed is to consider
$$I_n(a_1, \ldots, a_n) = \int_0^1 r(x)\,|f(x) - S_n(x)|\,dx. \tag{5}$$
If r(x) = 1, then I_n is the area between the graphs of y = f(x) and y = S_n(x) (see Figure 11.6.1). We can then determine the coefficients a_i so as to minimize I_n. To avoid the complications resulting from calculations with absolute values, it is more convenient to consider instead
$$R_n(a_1, \ldots, a_n) = \int_0^1 r(x)\,[f(x) - S_n(x)]^2\,dx \tag{6}$$
as our measure of the quality of approximation of the linear combination S_n(x) to f(x). While R_n is clearly similar in some ways to I_n, it lacks the simple geometric interpretation of the latter. Nevertheless, it is much easier mathematically to deal with R_n than with I_n. The quantity R_n is called the mean square error of the approximation S_n to f. If a_1, ..., a_n are chosen so as to minimize R_n, then S_n is said to approximate f in the mean square sense.

^{10} The least upper bound (lub) is an upper bound that is smaller than any other upper bound. The lub of a bounded function always exists, and is equal to its maximum if the function has one.
To choose a_1, ..., a_n so as to minimize R_n we must satisfy the necessary conditions
$$\partial R_n / \partial a_i = 0, \qquad i = 1, \ldots, n. \tag{7}$$
Writing out Eq. (7), and noting that ∂S_n(x; a_1, ..., a_n)/∂a_i is equal to φ_i(x), we obtain
$$\frac{\partial R_n}{\partial a_i} = -2 \int_0^1 r(x)\,[f(x) - S_n(x)]\,\phi_i(x)\,dx = 0. \tag{8}$$
Substituting for S_n(x) from Eq. (2) and making use of the orthogonality relation (1) lead to
$$a_i = \int_0^1 r(x)\,f(x)\,\phi_i(x)\,dx, \qquad i = 1, \ldots, n. \tag{9}$$
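In more detail, this step can be spelled out as follows, assuming (as the text indicates) that the orthogonality relation (1) has the orthonormal form ∫_0^1 r(x) φ_i(x) φ_j(x) dx = δ_ij. Equation (8) requires ∫_0^1 r(x)[f(x) − S_n(x)] φ_i(x) dx = 0, and substituting S_n(x) = Σ_j a_j φ_j(x) from Eq. (2) gives
$$\int_0^1 r(x)\,f(x)\,\phi_i(x)\,dx = \sum_{j=1}^{n} a_j \int_0^1 r(x)\,\phi_j(x)\,\phi_i(x)\,dx = a_i,$$
since each integral in the sum is 1 when j = i and 0 otherwise, which is Eq. (9).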
The coefficients defined by Eq. (9) are called the Fourier coefficients of f with respect to the orthonormal set φ_1, φ_2, ... and the weight function r. Since the conditions (7) are only necessary and not sufficient for R_n to be a minimum, a separate argument is required to show that R_n is actually minimized if the a_i are chosen by Eq. (9). This argument is outlined in Problem 5.
Note that the coefficients (9) are the same as those in the eigenfunction series whose convergence, under certain conditions, was stated in Theorem 11.2.4. Thus, S_n(x) is the nth partial sum in this series, and constitutes the best mean square approximation to f(x) that is possible with the functions φ_1, ..., φ_n. We will assume hereafter that the coefficients a_i in S_n(x) are given by Eq. (9).
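As a numerical sketch (not part of the text), the following computes the Fourier coefficients (9) for the illustrative choices r(x) = 1, the orthonormal functions φ_i(x) = √2 sin(iπx) on 0 ≤ x ≤ 1, and f(x) = x(1 − x), and checks that the mean square error R_n of Eq. (6) decreases as more terms are retained.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 2001)                  # grid on 0 <= x <= 1
dx = xs[1] - xs[0]

def integrate(y):
    # simple trapezoidal rule on the uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

r = np.ones_like(xs)                              # weight r(x) = 1 (assumed)
phi = lambda i: np.sqrt(2.0) * np.sin(i * np.pi * xs)   # orthonormal for r = 1
f = xs * (1.0 - xs)                               # example target function

def fourier_coeff(i):
    # Eq. (9): a_i = integral of r(x) f(x) phi_i(x) over 0 <= x <= 1
    return integrate(r * f * phi(i))

def mean_square_error(n):
    # Eq. (6): R_n = integral of r(x) [f(x) - S_n(x)]^2
    Sn = sum(fourier_coeff(i) * phi(i) for i in range(1, n + 1))
    return integrate(r * (f - Sn) ** 2)

for n in (1, 3, 5, 7):
    print(n, mean_square_error(n))    # R_n decreases as more terms are added
```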