
# Elementary Differential Equations, 7th edition - Boyce W. E.

Boyce W. E. Elementary Differential Equations, 7th edition. Wiley, 2001. 1310 p.
ISBN 0-471-31999-6

## 11.6 Series of Orthogonal Functions: Mean Convergence

$$R_n(a_1, \ldots, a_n) = \int_0^1 r(x)\left[ f(x) - S_n(x) \right]^2 dx \tag{6}$$

as our measure of the quality of approximation of the linear combination $S_n(x)$ to $f(x)$. While $R_n$ is clearly similar in some ways to $I_n$, it lacks the simple geometric interpretation of the latter. Nevertheless, it is much easier mathematically to deal with $R_n$ than with $I_n$. The quantity $R_n$ is called the mean square error of the approximation $S_n$ to $f$. If $a_1, \ldots, a_n$ are chosen so as to minimize $R_n$, then $S_n$ is said to approximate $f$ in the mean square sense.

[10] The least upper bound (lub) is an upper bound that is smaller than any other upper bound. The lub of a bounded function always exists, and is equal to its maximum if the function has one.
To choose $a_1, \ldots, a_n$ so as to minimize $R_n$ we must satisfy the necessary conditions

$$\partial R_n / \partial a_i = 0, \qquad i = 1, \ldots, n. \tag{7}$$

Writing out Eq. (7), and noting that $\partial S_n(x; a_1, \ldots, a_n)/\partial a_i$ is equal to $\phi_i(x)$, we obtain

$$\frac{\partial R_n}{\partial a_i} = -2 \int_0^1 r(x) \left[ f(x) - S_n(x) \right] \phi_i(x) \, dx = 0. \tag{8}$$

Substituting for $S_n(x)$ from Eq. (2) and making use of the orthogonality relation (1) lead to

$$a_i = \int_0^1 r(x) f(x) \phi_i(x) \, dx, \qquad i = 1, \ldots, n. \tag{9}$$
The coefficients defined by Eq. (9) are called the Fourier coefficients of $f$ with respect to the orthonormal set $\phi_1, \phi_2, \ldots, \phi_n$ and the weight function $r$. Since the conditions (7) are only necessary and not sufficient for $R_n$ to be a minimum, a separate argument is required to show that $R_n$ is actually minimized if the $a_i$ are chosen by Eq. (9). This argument is outlined in Problem 5.
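As a concrete numerical sketch (an illustration, not an example from the text), take the weight $r(x) = 1$ on $0 \le x \le 1$ and the orthonormal set $\phi_i(x) = \sqrt{2}\,\sin(i\pi x)$; the function $f(x) = x(1 - x)$ and the midpoint-rule grid size are illustrative choices. The code approximates the Fourier coefficients of Eq. (9) by quadrature:

```python
import math

def inner(f, g, r=lambda x: 1.0, n_pts=2000):
    """Midpoint-rule approximation of integral_0^1 r(x) f(x) g(x) dx."""
    h = 1.0 / n_pts
    return sum(r((j + 0.5) * h) * f((j + 0.5) * h) * g((j + 0.5) * h)
               for j in range(n_pts)) * h

def phi(i):
    """Orthonormal set on [0, 1] for weight r(x) = 1: phi_i(x) = sqrt(2) sin(i pi x)."""
    return lambda x: math.sqrt(2.0) * math.sin(i * math.pi * x)

f = lambda x: x * (1.0 - x)  # illustrative choice of f

# Fourier coefficients a_i = integral_0^1 r(x) f(x) phi_i(x) dx, as in Eq. (9)
a = [inner(f, phi(i)) for i in range(1, 6)]

# Sanity checks of orthonormality: (phi_1, phi_1) is close to 1, (phi_1, phi_2) close to 0
print(inner(phi(1), phi(1)), inner(phi(1), phi(2)))
```

For this particular $f$ the odd coefficients can also be found in closed form ($a_i = 4\sqrt{2}/(i\pi)^3$ for odd $i$, $0$ for even $i$), which makes the quadrature easy to validate.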
Note that the coefficients (9) are the same as those in the eigenfunction series whose convergence, under certain conditions, was stated in Theorem 11.2.4. Thus, $S_n(x)$ is the $n$th partial sum in this series, and constitutes the best mean square approximation to $f(x)$ that is possible with the functions $\phi_1, \ldots, \phi_n$. We will assume hereafter that the coefficients $a_i$ in $S_n(x)$ are given by Eq. (9).
Equation (9) is noteworthy in two other important respects. In the first place, it gives a formula for each $a_i$ separately, rather than a set of linear algebraic equations for $a_1, \ldots, a_n$ as in the method of collocation, for example. This is due to the orthogonality of the base functions $\phi_1, \ldots, \phi_n$. Further, the formula for $a_i$ is independent of $n$, the number of terms in $S_n(x)$. The practical significance of this is as follows. Suppose that, to obtain a better approximation to $f$, we desire to use an approximation with more terms, say, $k$ terms, where $k > n$. It is then unnecessary to recompute the first $n$ coefficients in $S_k(x)$. All that is required is to compute from Eq. (9) the coefficients $a_{n+1}, \ldots, a_k$ arising from the additional base functions $\phi_{n+1}, \ldots, \phi_k$. Of course, if $f$, $r$, and the $\phi_n$ are complicated functions, it may be necessary to evaluate the integrals numerically.

Chapter 11. Boundary Value Problems and Sturm-Liouville Theory
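The two properties just described can be seen numerically. In the sketch below (same illustrative choices as before: $r(x) = 1$, $\phi_i(x) = \sqrt{2}\,\sin(i\pi x)$, $f(x) = x(1-x)$, none taken from the text), ten coefficients computed once serve every partial sum, and the mean square error can only decrease as terms are added:

```python
import math

def midpoint(g, n_pts=2000):
    """Midpoint-rule approximation of integral_0^1 g(x) dx."""
    h = 1.0 / n_pts
    return sum(g((j + 0.5) * h) for j in range(n_pts)) * h

def phi(i):
    return lambda x: math.sqrt(2.0) * math.sin(i * math.pi * x)

f = lambda x: x * (1.0 - x)

# Each a_i from Eq. (9) is independent of n, so computing ten of them once
# serves every partial sum S_1, ..., S_10 with no recomputation.
coeffs = [midpoint(lambda x, p=phi(i): f(x) * p(x)) for i in range(1, 11)]

def R(n):
    """Mean square error R_n = integral_0^1 [f(x) - S_n(x)]^2 dx (weight r = 1)."""
    S = lambda x: sum(coeffs[i] * phi(i + 1)(x) for i in range(n))
    return midpoint(lambda x: (f(x) - S(x)) ** 2)

errors = [R(n) for n in (1, 3, 10)]  # non-increasing as n grows
print(errors)
```

Since each $S_n$ is the best mean square approximation using its terms, adding terms can never increase the error, so the printed sequence decreases.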
Now let us suppose that there is an infinite sequence of functions $\phi_1, \phi_2, \ldots$, which are continuous and orthonormal on the interval $0 \le x \le 1$. Suppose further that, as $n$ increases without bound, the mean square error $R_n$ approaches zero. In this event the infinite series

$$\sum_{i=1}^{\infty} a_i \phi_i(x)$$

is said to converge in the mean square sense (or, more simply, in the mean) to $f(x)$. Mean convergence is an essentially different type of convergence than the pointwise convergence considered up to now. A series may converge in the mean without converging at each point. This is plausible geometrically because the area between two curves, which behaves in the same way as the mean square error, may be zero even though the functions are not the same at every point. They may differ on any finite set of points, for example, without affecting the mean square error. It is less obvious, but also true, that even if an infinite series converges at every point, it may not converge in the mean. Indeed, the mean square error may even become unbounded. An example of this phenomenon is given in Problem 4.
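The distinction can be illustrated numerically (this is a standard assumed example, separate from the text's Problem 4). For $f(x) = x$ with $\phi_i(x) = \sqrt{2}\,\sin(i\pi x)$ on $[0, 1]$, the coefficients are $a_i = \sqrt{2}\,(-1)^{i+1}/(i\pi)$: the mean square error shrinks as $n$ grows, yet every partial sum vanishes at $x = 1$ while $f(1) = 1$, so the series cannot converge pointwise at that endpoint:

```python
import math

f = lambda x: x

def S(n, x):
    # a_i * phi_i(x) = [sqrt(2)(-1)**(i+1)/(i*pi)] * [sqrt(2) sin(i*pi*x)]
    #               = 2*(-1)**(i+1)/(i*pi) * sin(i*pi*x), closed form for f(x) = x
    return sum(2.0 * (-1) ** (i + 1) / (i * math.pi) * math.sin(i * math.pi * x)
               for i in range(1, n + 1))

def mean_sq_err(n, n_pts=4000):
    """R_n = integral_0^1 [f(x) - S_n(x)]^2 dx by the midpoint rule."""
    h = 1.0 / n_pts
    return sum((f((j + 0.5) * h) - S(n, (j + 0.5) * h)) ** 2
               for j in range(n_pts)) * h

R20, R80 = mean_sq_err(20), mean_sq_err(80)  # R80 < R20: convergence in the mean
end_val = S(80, 1.0)                         # every S_n(1) = 0, but f(1) = 1
print(R20, R80, end_val)
```

So the partial sums hug $f$ ever more closely in the mean square sense while staying a fixed distance away at the single point $x = 1$, which costs nothing in the integral.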
Now suppose that we wish to know what class of functions, defined on $0 \le x \le 1$, can be represented as an infinite series of the orthonormal set $\phi_i$, $i = 1, 2, \ldots$. The answer depends on what kind of convergence we require. We say that the set $\phi_1, \ldots, \phi_n, \ldots$ is complete with respect to mean square convergence for a set of functions $F$, if for each function $f$ in $F$, the series

$$f(x) = \sum_{i=1}^{\infty} a_i \phi_i(x), \tag{10}$$

with coefficients given by Eq. (9), converges in the mean. There is a similar definition for completeness with respect to pointwise convergence.
Theorems having to do with the convergence of series such as that in Eq. (10) can now be restated in terms of the idea of completeness. For example, Theorem 11.2.4 can be restated as follows: The eigenfunctions of the Sturm-Liouville problem

$$-\left[ p(x) y' \right]' + q(x) y = \lambda r(x) y, \qquad 0 < x < 1, \tag{11}$$

$$a_1 y(0) + a_2 y'(0) = 0, \qquad b_1 y(1) + b_2 y'(1) = 0 \tag{12}$$

are complete with respect to ordinary pointwise convergence for the set of functions that are continuous on $0 \le x \le 1$, and have a piecewise continuous derivative there.
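A quick numerical check of this restatement (with illustrative parameter choices, not the text's): taking $p(x) = r(x) = 1$, $q(x) = 0$, and $a_2 = b_2 = 0$ in Eqs. (11), (12) gives $-y'' = \lambda y$, $y(0) = y(1) = 0$, whose normalized eigenfunctions are $\sqrt{2}\,\sin(n\pi x)$. The sketch expands the continuous, piecewise differentiable triangle function $f(x) = \min(x, 1 - x)$ and watches the partial sums converge at the interior point $x = 1/2$:

```python
import math

# Normalized eigenfunctions of -y'' = lam*y, y(0) = y(1) = 0: phi_n(x) = sqrt(2) sin(n pi x)
def phi(n):
    return lambda x: math.sqrt(2.0) * math.sin(n * math.pi * x)

f = lambda x: min(x, 1.0 - x)  # continuous, with a piecewise continuous derivative

def midpoint(g, n_pts=4000):
    h = 1.0 / n_pts
    return sum(g((j + 0.5) * h) for j in range(n_pts)) * h

# Fourier coefficients from Eq. (9) with weight r(x) = 1
coeffs = [midpoint(lambda x, p=phi(n): f(x) * p(x)) for n in range(1, 41)]

def S(n, x):
    return sum(coeffs[i] * phi(i + 1)(x) for i in range(n))

# Pointwise convergence at an interior point: S_n(1/2) approaches f(1/2) = 1/2
vals = [S(n, 0.5) for n in (5, 40)]
print(vals)
```

The forty-term partial sum lands visibly closer to $f(1/2) = 1/2$ than the five-term one, consistent with the completeness statement for continuous, piecewise differentiable functions.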