Consider again the initial value problem
y' = 1 - t + 4y, y(0) = 1. (11)
With a step size of h = 0.1, determine an approximate value of the solution y = φ(t) at t = 0.4 using the fourth order Adams-Bashforth formula, the fourth order Adams-Moulton formula, and the predictor-corrector method.
For starting data we use the values of y1, y2, and y3 found from the Runge-Kutta method. These are tabulated in Table 8.3.1. Next, calculating the corresponding values of f(t, y), we obtain
y0 = 1, f0 = 5,
y1 = 1.6089333, f1 = 7.3357332,
y2 = 2.5050062, f2 = 10.820025,
y3 = 3.8294145, f3 = 16.017658.
Then from the Adams-Bashforth formula, Eq. (6), we find that y4 = 5.7836305. The exact value of the solution at t = 0.4, correct through eight digits, is 5.7942260, so the error is -0.010595.
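The Adams-Bashforth calculation can be reproduced with a short Python sketch (not part of the text); the starting values are those tabulated above, and the formula is the standard fourth order Adams-Bashforth step of Eq. (6).

```python
# Fourth order Adams-Bashforth step (Eq. (6)):
#   y_{n+1} = y_n + (h/24)(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3}),
# applied with the starting data of Example 1 for y' = 1 - t + 4y.
h = 0.1
y = [1.0, 1.6089333, 2.5050062, 3.8294145]      # y0, ..., y3 from Table 8.3.1
f = [1 - n * h + 4 * y[n] for n in range(4)]    # f(t_n, y_n) = 1 - t_n + 4 y_n
y4 = y[3] + (h / 24) * (55 * f[3] - 59 * f[2] + 37 * f[1] - 9 * f[0])
print(y4)                       # close to the text's value 5.7836305
print(y4 - 5.7942260)           # error, close to -0.010595
```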
The Adams-Moulton formula, Eq. (10), leads to the equation
y4 = 4.9251275 + 0.15y4,
from which it follows that y4 = 5.7942676 with an error of only 0.0000416.
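Because f(t, y) = 1 - t + 4y is linear in y, the implicit Adams-Moulton equation can be solved in closed form; the following sketch (not part of the text) separates f4 into its constant part (1 - t4) and the part 4y4 proportional to the unknown.

```python
# Fourth order Adams-Moulton step (Eq. (10)):
#   y_{n+1} = y_n + (h/24)(9 f_{n+1} + 19 f_n - 5 f_{n-1} + f_{n-2}).
# Writing f4 = (1 - t4) + 4*y4 gives the linear equation y4 = c + slope*y4.
h = 0.1
y = [1.0, 1.6089333, 2.5050062, 3.8294145]
f = [1 - n * h + 4 * y[n] for n in range(4)]
t4 = 0.4
c = y[3] + (h / 24) * (9 * (1 - t4) + 19 * f[3] - 5 * f[2] + f[1])
slope = (h / 24) * 9 * 4        # coefficient of y4; equals 0.15
y4 = c / (1 - slope)
print(c, y4)                    # close to 4.9251275 and 5.7942676
```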
Finally, using the result from the Adams-Bashforth formula as a predicted value of φ(0.4), we can then use Eq. (10) as a corrector. Corresponding to the predicted value of y4 we find that f4 = 23.734522. Hence, from Eq. (10), the corrected value of y4 is 5.7926721. This result is in error by -0.0015539.
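The two-stage predictor-corrector procedure just described can be sketched as follows (not part of the text): predict with Adams-Bashforth, evaluate f at the predicted value, then apply the Adams-Moulton formula once as a corrector.

```python
# Predictor-corrector: Adams-Bashforth (Eq. (6)) predicts y4, then one
# evaluation of f at the prediction feeds the Adams-Moulton corrector (Eq. (10)).
h = 0.1
y = [1.0, 1.6089333, 2.5050062, 3.8294145]
f = [1 - n * h + 4 * y[n] for n in range(4)]
y4_pred = y[3] + (h / 24) * (55 * f[3] - 59 * f[2] + 37 * f[1] - 9 * f[0])
f4 = 1 - 0.4 + 4 * y4_pred      # close to 23.734522
y4_corr = y[3] + (h / 24) * (9 * f4 + 19 * f[3] - 5 * f[2] + f[1])
print(f4, y4_corr)              # corrected value close to 5.7926721
```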
Observe that the Adams-Bashforth method is the simplest and fastest of these methods, since it involves only the evaluation of a single explicit formula. It is also the least accurate. Using the Adams-Moulton formula as a corrector increases the amount of calculation that is required, but the method is still explicit. In this problem the error in the corrected value of y4 is reduced by approximately a factor of 7 when compared to the error in the predicted value. The Adams-Moulton method alone yields by far the best result, with an error that is about 1/40 as large as the error from the predictor-corrector method. Remember, however, that the Adams-Moulton method is implicit, which means that an equation must be solved at each step. In the problem considered here this equation is linear, so the solution is quickly found, but in other problems this part of the procedure may be much more time-consuming.
The Runge-Kutta method with h = 0.1 gives y4 = 5.7927853 with an error of -0.0014407; see Table 8.3.1. Thus for this problem the Runge-Kutta method is comparable in accuracy to the predictor-corrector method.
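For comparison, the classical fourth order Runge-Kutta iteration that produced the starting values (and the value y4 = 5.7927853 quoted above) can be sketched as follows; this is a standard RK4 implementation, not code from the text.

```python
# Classical fourth order Runge-Kutta for y' = 1 - t + 4y, y(0) = 1,
# stepping with h = 0.1 from t = 0 to t = 0.4.
def f(t, y):
    return 1 - t + 4 * y

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.1
ys = [1.0]
for n in range(4):
    ys.append(rk4_step(n * h, ys[-1], h))
print(ys)   # y1..y3 match Table 8.3.1; y4 is close to 5.7927853
```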
8.4 Multistep Methods
Backward Differentiation Formulas. Another type of multistep method arises by using a polynomial Pk(t) to approximate the solution φ(t) of the initial value problem (1), rather than its derivative φ'(t) as in the Adams methods. We then differentiate Pk(t) and set Pk'(tn+1) equal to f(tn+1, yn+1) to obtain an implicit formula for yn+1. These are called backward differentiation formulas.
The simplest case uses a first degree polynomial P1(t) = At + B. The coefficients are chosen to match the computed values of the solution yn and yn+1. Thus A and B must satisfy
Atn + B = yn,
Atn+1 + B = yn+1. (12)
Since P1'(t) = A, the requirement that
P1'(tn+1) = f(tn+1, yn+1)
becomes
A = f(tn+1, yn+1). (13)
Another expression for A comes from subtracting the first of Eqs. (12) from the second, which gives
A = (yn+1 - yn)/h.
Substituting this value of A into Eq. (13) and rearranging terms, we obtain the first order backward differentiation formula
yn+1 = yn + hf(tn+1, yn+1). (14)
Note that Eq. (14) is just the backward Euler formula that we first saw in Section 8.1.
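As a small sketch (not from the text), the backward Euler formula (14) can be applied directly to the initial value problem (11): since f(t, y) = 1 - t + 4y is linear in y, the implicit equation at each step solves in closed form. Note that this first order method is far less accurate at h = 0.1 than the fourth order methods above.

```python
# First order backward differentiation formula (backward Euler, Eq. (14)):
#   y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).
# For f(t, y) = 1 - t + 4y this rearranges to the explicit update
#   y_{n+1} = (y_n + h(1 - t_{n+1})) / (1 - 4h).
h = 0.1
t, y = 0.0, 1.0
for _ in range(4):              # step from t = 0 to t = 0.4
    t += h
    y = (y + h * (1 - t)) / (1 - 4 * h)
print(y)    # a rough first order approximation of phi(0.4) = 5.7942260
```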
By using higher order polynomials and correspondingly more data points one can obtain backward differentiation formulas of any order. The second order formula is
yn+1 = (1/3)[4yn - yn-1 + 2hf(tn+1, yn+1)] (15)
and the fourth order formula is
yn+1 = (1/25)[48yn - 36yn-1 + 16yn-2 - 3yn-3 + 12hf(tn+1, yn+1)]. (16)
These formulas have local truncation errors proportional to h3 and h5, respectively.
Use the fourth order backward differentiation formula with h = 0.1 and the data given in Example 1 to determine an approximate value of the solution y = φ(t) at t = 0.4 for the initial value problem (11).
Using Eq. (16) with n = 3, h = 0.1, and with y0,..., y3 given in Example 1, we obtain the equation
y4 = 4.6837842 + 0.192y4,
from which it follows that y4 = 5.7967626.
Comparing the calculated value with the exact value φ(0.4) = 5.7942260, we find that the error is 0.0025366. This is somewhat better than the result using the Adams-Bashforth method, but not as good as the result using the predictor-corrector method, and not nearly as good as the result using the Adams-Moulton method.
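The backward differentiation calculation can be verified with the following sketch (not part of the text); as with the Adams-Moulton formula, the linearity of f in y lets us solve the implicit Eq. (16) in closed form.

```python
# Fourth order backward differentiation formula (Eq. (16)):
#   y4 = (1/25)[48 y3 - 36 y2 + 16 y1 - 3 y0 + 12 h f(t4, y4)].
# With f(t, y) = 1 - t + 4y the implicit equation is y4 = c + slope*y4.
h = 0.1
y = [1.0, 1.6089333, 2.5050062, 3.8294145]      # y0, ..., y3 from Example 1
t4 = 0.4
c = (48 * y[3] - 36 * y[2] + 16 * y[1] - 3 * y[0] + 12 * h * (1 - t4)) / 25
slope = 12 * h * 4 / 25         # coefficient of y4; equals 0.192
y4 = c / (1 - slope)
print(c, y4)                    # close to 4.6837842 and 5.7967626
```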
A comparison between one-step and multistep methods must take several factors into consideration. The fourth order Runge-Kutta method requires four evaluations of f at each step, while the fourth order Adams-Bashforth method (once past the starting values) requires only one and the predictor-corrector method only two. Thus, for a given step size h, the latter two methods may well be considerably faster than Runge-Kutta. However, if Runge-Kutta is more accurate and therefore can use fewer steps, then the difference in speed will be reduced and perhaps eliminated. The Adams-Moulton and backward differentiation formulas also require that the difficulty in solving the implicit equation at each step be taken into account. All multistep methods have the possible disadvantage that errors in earlier steps can feed back into later calculations with unfavorable consequences. On the other hand, the underlying polynomial approximations in multistep methods make it easy to approximate the solution at points between the mesh points, should this be desirable. Multistep methods have become popular largely because it is relatively easy to estimate the error at each step and to adjust the order or the step size to control it. For a further discussion of such questions as these see the books listed at the end of the chapter; in particular, Shampine (1994) is an authoritative source.