Boyce, W. E. Elementary Differential Equations, 7th edition. Wiley, 2001. 1310 p.
ISBN 0-471-31999-6

ρ²u_ρρ + 2ρu_ρ + (csc²φ)u_θθ + u_φφ + (cot φ)u_φ = 0.
(a) Show that if u(ρ, θ, φ) = P(ρ)Θ(θ)Φ(φ), then P, Θ, and Φ satisfy ordinary differential equations of the form
ρ²P″ + 2ρP′ − μ²P = 0,
Θ″ + λ²Θ = 0,
(sin²φ)Φ″ + (sin φ cos φ)Φ′ + (μ² sin²φ − λ²)Φ = 0.
The first of these equations is of the Euler type, while the third is related to Legendre’s equation.
(b) Show that if u(ρ, θ, φ) is independent of θ, then the first equation in part (a) is unchanged, the second is omitted, and the third becomes
(sin²φ)Φ″ + (sin φ cos φ)Φ′ + (μ² sin²φ)Φ = 0.
(c) Show that if a new independent variable is defined by s = cos φ, then the equation for Φ in part (b) becomes
(1 − s²) d²Φ/ds² − 2s dΦ/ds + μ²Φ = 0,  −1 ≤ s ≤ 1.
Note that this is Legendre’s equation.
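A symbolic check of parts (b) and (c): with μ² = n(n + 1), the Legendre polynomial Pₙ gives Φ(φ) = Pₙ(cos φ) as a solution of the equation in part (b). A sketch using sympy; the choice n = 3 is arbitrary and not from the text:

```python
import sympy as sp

phi = sp.symbols('phi')
n = 3                        # hypothetical choice; any nonnegative integer works
mu2 = n*(n + 1)              # mu^2 = n(n+1) picks out the Legendre polynomials

# Candidate solution of the part (b) equation: Phi(phi) = P_n(cos phi)
Phi = sp.legendre(n, sp.cos(phi))

# Residual of (sin^2 phi) Phi'' + (sin phi cos phi) Phi' + mu^2 (sin^2 phi) Phi
residual = (sp.sin(phi)**2 * sp.diff(Phi, phi, 2)
            + sp.sin(phi)*sp.cos(phi) * sp.diff(Phi, phi)
            + mu2 * sp.sin(phi)**2 * Phi)

print(sp.simplify(residual))    # 0
```

The chain rule with s = cos φ turns this residual into sin²φ times the left side of Legendre's equation, which is why it vanishes identically.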
10. Find the steady-state temperature u(ρ, φ) in a sphere of unit radius if the temperature is independent of θ and satisfies the boundary condition
u(1, φ) = f(φ),  0 ≤ φ ≤ π.
Hint: Refer to Problem 9 and to Problems 22 through 29 of Section 5.3. Use the fact that the only solutions of Legendre’s equation that are finite at both ±1 are the Legendre polynomials.
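Following the hint, the bounded solutions lead to the standard series u(ρ, φ) = Σₙ cₙρⁿPₙ(cos φ) with cₙ = ((2n + 1)/2) ∫₀^π f(φ)Pₙ(cos φ) sin φ dφ. A numerical sketch with numpy; the boundary function f(φ) = cos φ is a hypothetical test case (it equals P₁(cos φ), so only c₁ should survive):

```python
import numpy as np
from numpy.polynomial import legendre as L

def trap(g, t):
    """Trapezoidal rule on the uniform grid t."""
    return (t[1] - t[0]) * (np.sum(g) - 0.5*(g[0] + g[-1]))

def sphere_coeffs(f, N, M=4000):
    """c_n = (2n+1)/2 * integral_0^pi f(phi) P_n(cos phi) sin(phi) dphi, n = 0..N."""
    t = np.linspace(0.0, np.pi, M)
    x = np.cos(t)
    return np.array([(2*n + 1)/2 * trap(f(t) * L.legval(x, [0]*n + [1]) * np.sin(t), t)
                     for n in range(N + 1)])

def u(rho, phi, c):
    """Partial sum u(rho, phi) = sum_n c_n rho^n P_n(cos phi)."""
    x = np.cos(phi)
    return sum(c[n] * rho**n * L.legval(x, [0]*n + [1]) for n in range(len(c)))

# Hypothetical boundary temperature f(phi) = cos(phi) = P_1(cos phi):
# the series should collapse to u = rho*cos(phi), i.e. c ~ [0, 1, 0, ...].
c = sphere_coeffs(np.cos, N=4)
print(np.round(c, 4))
print(u(0.5, np.pi/3, c))    # ~ 0.5*cos(pi/3) = 0.25
```

Only Legendre polynomials appear in the series because, as the hint states, they are the only solutions of Legendre's equation finite at both s = ±1.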
11.6 Series of Orthogonal Functions: Mean Convergence
In Section 11.2 we stated that under certain restrictions a given function f can be expanded in a series of eigenfunctions of a Sturm-Liouville boundary value problem, the series converging to [f(x+) + f(x−)]/2 at each point in the open interval. Under somewhat more restrictive conditions the series converges to f(x) at each point in the closed interval. This type of convergence is referred to as pointwise convergence. In this section we describe a different kind of convergence that is especially useful for series of orthogonal functions, such as eigenfunctions.
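The pointwise statement can be checked numerically: for a function with a jump, the partial sums of its eigenfunction series approach the average of the one-sided limits at the jump. A sketch in Python; the step function and the sine eigenfunctions are illustrative choices, not from the text:

```python
import numpy as np

# Step function with a jump at x = 1/2: f = 0 on (0, 1/2), f = 1 on (1/2, 1),
# expanded in the eigenfunctions sin(n*pi*x) of y'' + lam*y = 0, y(0) = y(1) = 0.
def b(n):
    # b_n = 2 * integral_{1/2}^{1} sin(n*pi*x) dx, evaluated in closed form
    return 2.0/(n*np.pi) * (np.cos(n*np.pi/2) - np.cos(n*np.pi))

def S(x, N):
    """Partial sum of the first N terms of the eigenfunction series at x."""
    n = np.arange(1, N + 1)
    return np.sum(b(n) * np.sin(n*np.pi*x))

# At the jump the partial sums approach [f(1/2+) + f(1/2-)]/2 = 1/2
for N in (10, 100, 1000):
    print(N, S(0.5, N))
```

Away from the jump the partial sums approach f(x) itself, but more and more slowly as the jump is approached, which is the nonuniformity that motivates the different notion of convergence introduced below.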
Suppose that we are given the set of functions φ₁, φ₂, …, φₙ, which are continuous on the interval 0 ≤ x ≤ 1 and satisfy the orthonormality condition
∫₀¹ r(x)φᵢ(x)φⱼ(x) dx = {0, i ≠ j; 1, i = j},  (1)
where r is a nonnegative weight function. Suppose also that we wish to approximate a given function f, defined on 0 ≤ x ≤ 1, by a linear combination of φ₁, …, φₙ. That is, if
Sₙ(x) = Σᵢ₌₁ⁿ aᵢφᵢ(x),  (2)
we wish to choose the coefficients a₁, …, aₙ so that the function Sₙ will best approximate f on 0 ≤ x ≤ 1. The first problem that we must face in doing this is to state precisely what we mean by "best approximate f on 0 ≤ x ≤ 1." There are several reasonable meanings that can be attached to this phrase.
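As a concrete instance of this setup, the functions φₙ(x) = √2 sin(nπx) with weight r(x) = 1 form an orthonormal set on 0 ≤ x ≤ 1. A numerical check of condition (1) with numpy; this particular basis is an assumption for illustration:

```python
import numpy as np

# Hypothetical orthonormal set on 0 <= x <= 1 with weight r(x) = 1:
# phi_n(x) = sqrt(2) * sin(n*pi*x).
M = 20001
x = np.linspace(0.0, 1.0, M)
w = np.full(M, x[1] - x[0])      # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

def phi(n):
    return np.sqrt(2.0) * np.sin(n*np.pi*x)

# Gram matrix of inner products: condition (1) says it should be the identity
G = np.array([[np.sum(w * phi(i) * phi(j)) for j in range(1, 5)]
              for i in range(1, 5)])
print(np.round(G, 6))            # approximately the 4x4 identity matrix
```

Any eigenfunction family of a Sturm-Liouville problem, suitably normalized, plays the same role with its own weight r.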
1. We can choose n points x₁, …, xₙ in the interval 0 ≤ x ≤ 1, and require that Sₙ(x) have the same value as f(x) at each of these points. The coefficients a₁, …, aₙ are found by solving the set of linear algebraic equations
Σᵢ₌₁ⁿ aᵢφᵢ(xⱼ) = f(xⱼ),  j = 1, …, n.  (3)
This procedure is known as the method of collocation. It has the advantage that it is very easy to write down Eqs. (3); one needs only to evaluate the functions involved at the points x₁, …, xₙ. If these points are well chosen, and if n is fairly large, then presumably Sₙ(x) will not only be equal to f(x) at the chosen points but will be reasonably close to it at other points as well. However, collocation has several deficiencies. One is that if one more base function φₙ₊₁ is added, then one more point xₙ₊₁ is required, and all the coefficients must be recomputed. Thus, it is inconvenient to improve the accuracy of a collocation approximation by including additional terms. Further, the coefficients aᵢ depend on the location of the points x₁, …, xₙ, and it is not obvious how best to select these points.
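The collocation procedure amounts to a single linear solve. A sketch with numpy; the basis φᵢ(x) = sin(iπx), the points xⱼ = j/(n + 1), and the target f(x) = x(1 − x) are all hypothetical choices:

```python
import numpy as np

# Collocation sketch for Eqs. (3): pick n points and force S_n(x_j) = f(x_j).
n = 5
i = np.arange(1, n + 1)
xj = i / (n + 1.0)                       # n equally spaced interior points
A = np.sin(np.pi * np.outer(xj, i))      # A[j, k] = phi_{k+1}(x_j) = sin((k+1)*pi*x_j)
fj = xj * (1 - xj)                       # f evaluated at the collocation points
a = np.linalg.solve(A, fj)               # coefficients a_1, ..., a_n

Sn = lambda x: np.sin(np.pi * np.outer(x, i)) @ a
print(np.max(np.abs(Sn(xj) - fj)))       # essentially zero at the chosen points
```

Enlarging the basis to n + 1 functions changes the matrix A and forces a full re-solve, which is exactly the recomputation drawback noted above.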
2. Alternatively, we can consider the difference |f(x) − Sₙ(x)| and try to make it as small as possible. The trouble here is that |f(x) − Sₙ(x)| is a function of x as well as of the coefficients a₁, …, aₙ, and it is not clear how to calculate aᵢ. The choice of aᵢ that makes |f(x) − Sₙ(x)| small at one point may make it large at another. One way to proceed is to consider instead the least upper bound of |f(x) − Sₙ(x)| for x in 0 ≤ x ≤ 1, and then to choose a₁, …, aₙ so as to make this quantity as small as possible. That is, if
Eₙ(a₁, …, aₙ) = lub_{0≤x≤1} |f(x) − Sₙ(x)|,  (4)
then choose a₁, …, aₙ so as to minimize Eₙ. This approach is intuitively appealing, and is often used in theoretical calculations. However, in practice, it is usually very hard, if not impossible, to write down an explicit formula for Eₙ(a₁, …, aₙ). Further, this procedure also shares one of the disadvantages of collocation; namely, on adding an additional term to Sₙ(x), one must recompute all the preceding coefficients. Thus, it is not often useful in practical problems.
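The quantity in Eq. (4) is easy to evaluate approximately (replace the least upper bound by a maximum over a fine grid) but hard to minimize in closed form. A sketch with numpy; the monomial basis 1, x, x² and f(x) = eˣ are hypothetical choices:

```python
import numpy as np

# Grid approximation to E_n(a_1, ..., a_n) = lub |f(x) - S_n(x)| on 0 <= x <= 1
x = np.linspace(0.0, 1.0, 10001)
f = np.exp(x)

def E(coeffs):
    """coeffs are given lowest degree first: S_n(x) = sum_i coeffs[i] * x**i."""
    Sn = np.polyval(list(coeffs)[::-1], x)   # polyval wants highest degree first
    return np.max(np.abs(f - Sn))

E_taylor = E([1.0, 1.0, 0.5])                # Taylor coefficients of e^x
E_lsq = E(np.polyfit(x, f, 2)[::-1])         # least-squares quadratic instead
print(E_taylor, E_lsq)
```

The Taylor coefficients are far from optimal for this criterion, and even the least-squares fit does noticeably better; finding the true minimizer of Eₙ requires an iterative minimax algorithm, which is why this criterion is rarely convenient in practice.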
3. Another way to proceed is to consider
Iₙ(a₁, …, aₙ) = ∫₀¹ r(x)|f(x) − Sₙ(x)| dx.  (5)
If r(x) = 1, then Iₙ is the area between the graphs of y = f(x) and y = Sₙ(x) (see Figure 11.6.1). We can then determine the coefficients aᵢ so as to minimize Iₙ. To avoid the complications resulting from calculations with absolute values, it is more convenient to consider instead