φ₁(t) = tan(t + π/4),    φ₂(t) = 1/(1 − t).    (7)

Thus φ₂(t) → ∞ as t → 1, and φ₁(t) → ∞ as t → π/4 ≈ 0.785. These calculations show that the solution of the original initial value problem exists at least for 0 ≤ t < π/4 and at most for 0 ≤ t < 1. The solution of the problem (3) has a vertical asymptote for some t in π/4 ≤ t ≤ 1 and thus does not exist on the entire interval 0 ≤ t ≤ 1.
Our numerical calculations, however, suggest that we can go beyond t = π/4, and probably beyond t = 0.9. Assuming that the solution of the initial value problem exists at t = 0.9 and has the value 14.305, we can obtain a more accurate appraisal of what happens for larger t by considering the initial value problems (5) and (6) with y(0) = 1 replaced by y(0.9) = 14.305. Then we obtain
φ₁(t) = tan(t + 0.60100),    φ₂(t) = 1/(0.96991 − t),    (8)
where only five decimal places have been kept. Thus φ₁(t) → ∞ as t → π/2 − 0.60100 ≈ 0.96980, and φ₂(t) → ∞ as t → 0.96991. We conclude that the vertical asymptote of the solution of the initial value problem (3) lies between these two values. This example illustrates the sort of information that can be obtained by a judicious combination of analytical and numerical work.
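The constants in Eq. (8) can be checked directly from the matching conditions: requiring φ₁(0.9) = 14.305 gives the phase arctan(14.305) − 0.9, and requiring φ₂(0.9) = 14.305 locates the pole at 0.9 + 1/14.305. A quick sketch in Python (a check on the arithmetic, not part of the derivation):

```python
import math

y09 = 14.305   # assumed value of the solution at t = 0.9 (from the text)

# phi_1(t) = tan(t + C) with phi_1(0.9) = 14.305  =>  C = arctan(14.305) - 0.9
C = math.atan(y09) - 0.9
print(f"C = {C:.5f}")                          # 0.60100, the constant in Eq. (8)

# phi_1 has its vertical asymptote where t + C = pi/2
t_lower = math.pi / 2 - C
print(f"phi_1 asymptote: t = {t_lower:.5f}")   # ~0.9698, the lower bound

# phi_2(t) = 1/(c - t) with phi_2(0.9) = 14.305  =>  c = 0.9 + 1/14.305
t_upper = 0.9 + 1 / y09
print(f"phi_2 asymptote: t = {t_upper:.5f}")   # 0.96991, the upper bound
```

The asymptote of the solution of problem (3) is trapped in the narrow interval between these two computed values.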
Stability. The concept of stability is associated with the possibility that small errors that are introduced in the course of a mathematical procedure may die out as the procedure continues. Conversely, instability occurs if small errors tend to increase, perhaps without bound. For example, in Section 2.5 we identified equilibrium solutions of a differential equation as (asymptotically) stable or unstable, depending on whether solutions that were initially near the equilibrium solution tended to approach it or to depart from it as t increases. Somewhat more generally, the solution of an initial value problem is asymptotically stable if initially nearby solutions tend to approach the given solution, and unstable if they tend to depart from it. Visually, in an asymptotically stable problem the graphs of solutions will come together, while in an unstable problem they will separate.
8.5 More on Errors; Stability
If we are solving an initial value problem numerically, the best that we can hope for is that the numerical approximation will mimic the behavior of the actual solution. We cannot make an unstable problem into a stable one merely by solving it numerically. However, it may well happen that a numerical procedure will introduce instabilities that were not part of the original problem, and this can cause trouble in approximating the solution. Avoidance of such instabilities may require us to place restrictions on the step size h.
To illustrate what can happen in the simplest possible context, consider the differential equation
dy/dt = ry, (9)
where r is a constant. Suppose that in solving this equation we have reached the point ( tn, yn). Let us compare the exact solution of Eq. (9) that passes through this point, namely,
y = φ(t) = yₙ exp[r(t − tₙ)],    (10)

with numerical approximations obtained from the Euler formula
yₙ₊₁ = yₙ + h f(tₙ, yₙ)    (11)
and from the backward Euler formula
yₙ₊₁ = yₙ + h f(tₙ₊₁, yₙ₊₁).    (12)

From the Euler formula (11) we obtain
yₙ₊₁ = yₙ + h r yₙ = yₙ(1 + rh).    (13)

Similarly, from the backward Euler formula (12) we have
yₙ₊₁ = yₙ + h r yₙ₊₁, or

yₙ₊₁ = yₙ/(1 − rh) = yₙ[1 + rh + (rh)² + ⋯].    (14)
Finally, evaluating the solution (10) at tₙ + h, we find that

yₙ₊₁ = yₙ exp(rh) = yₙ[1 + rh + (rh)²/2! + ⋯].    (15)
Comparing Eqs. (13), (14), and (15), we see that the errors in both the Euler formula and the backward Euler formula are of order h², as the theory predicts.
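This order-h² local error is easy to observe numerically. The sketch below takes a single step with each formula for Eq. (9) and compares it with the exact value; the values r = 1 and yₙ = 1 are illustrative choices, not from the text. Halving h should reduce each error by roughly a factor of 4:

```python
import math

r, yn = 1.0, 1.0   # illustrative values; any r and yn would do

def one_step_errors(h):
    """Absolute error of one Euler and one backward Euler step for y' = r*y."""
    exact    = yn * math.exp(r * h)   # Eq. (15)
    euler    = yn * (1 + r * h)       # Eq. (13)
    backward = yn / (1 - r * h)       # Eq. (14)
    return abs(euler - exact), abs(backward - exact)

e1, b1 = one_step_errors(0.10)
e2, b2 = one_step_errors(0.05)
print(e1 / e2, b1 / b2)   # both ratios near 4: local errors behave like h^2
```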
Now suppose that we change the value yₙ to yₙ + δ. Think, if you wish, of δ as the error that has accumulated by the time we reach t = tₙ. In going one more step to tₙ₊₁, the question is whether this error increases or decreases.
For the exact solution (15), the change in yₙ₊₁ due to the change δ in yₙ is just δ exp(rh). This quantity is less than δ if exp(rh) < 1, that is, if r < 0. This confirms our conclusion in Chapter 2 that Eq. (9) is asymptotically stable if r < 0 and unstable if r > 0.
For the backward Euler method, the change in yₙ₊₁ in Eq. (14) due to δ is δ/(1 − rh). For r < 0 the factor 1/(1 − rh) is always positive and never greater than 1. Thus, if the differential equation is stable, then so is the backward Euler method for an arbitrary step size h.
Chapter 8. Numerical Methods
On the other hand, for the Euler method, the change in yₙ₊₁ in Eq. (13) due to δ is δ(1 + rh). If we recall that r < 0 and require that |1 + rh| < 1, then we find that h must satisfy h < 2/|r|. Thus the Euler method is not stable for this problem unless h is sufficiently small.
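The bound h < 2/|r| can be demonstrated directly. In the sketch below (illustrative values: r = −10, so the bound is h = 0.2, and y₀ = 1), the Euler iteration with h = 0.25 multiplies y by −1.5 at each step and blows up, while Euler with h = 0.05 and backward Euler with either step size decay, as the true solution does:

```python
r = -10.0   # illustrative decaying case; stability bound is h < 2/|r| = 0.2

def euler(h, steps, y0=1.0):
    """Iterate Eq. (13): each step multiplies y by (1 + r*h)."""
    y = y0
    for _ in range(steps):
        y = y * (1 + r * h)
    return y

def backward_euler(h, steps, y0=1.0):
    """Iterate Eq. (14): each step multiplies y by 1/(1 - r*h)."""
    y = y0
    for _ in range(steps):
        y = y / (1 - r * h)
    return y

print(euler(0.25, 20))           # step factor -1.5: |y| grows without bound
print(euler(0.05, 20))           # step factor 0.5: y decays to zero
print(backward_euler(0.25, 20))  # step factor 1/3.5: decays for any h > 0
```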
The restriction on the step size h in using the Euler method in the preceding example is rather mild unless |r| is quite large. Nonetheless, the example illustrates that it may be necessary to restrict h in order to achieve stability in the numerical method, even though the initial value problem itself is stable. Problems for which a much smaller step size is needed for stability than for accuracy are called stiff. The backward differentiation formulas described in Section 8.4 (of which the backward Euler formula is the lowest-order example) are the most popular formulas for solving stiff problems. The following example illustrates the kind of instabilities that can occur in trying to approximate the solution of a stiff problem.