1. Assume that if x(1), ..., x(k) are solutions, then
x = c1x(1) + ... + ckx(k) is a solution. Then use Theorem 7.4.1
to conclude that x + ck+1x(k+1) is also a solution, and thus
c1x(1) + ... + ck+1x(k+1) is a solution if x(1), ..., x(k+1)
are solutions.

2a. From Eq.(10) we have W = x1(1)x2(2) - x1(2)x2(1). Taking the
derivative of these two products yields four terms, which may be
written as

  dW/dt = [ (dx1(1)/dt)x2(2) - (dx1(2)/dt)x2(1) ]
        + [ x1(1)(dx2(2)/dt) - x1(2)(dx2(1)/dt) ].

The terms in the square brackets can now be recognized as the
respective determinants appearing in the desired solution. A
similar result was mentioned in Problem 20 of Section 4.1.
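The decomposition in 2a can be checked numerically. A minimal Python sketch, assuming as a concrete example the pair x(1) = (t, 1), x(2) = (t^2, 2t) from Problem 6, for which W = t^2 and dW/dt = 2t:

```python
def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a*d - b*c

for t in (-1.5, 0.5, 2.0):
    # first determinant: top row differentiated, (t)' = 1 and (t^2)' = 2t
    first = det2(1.0, 2*t, 1.0, 2*t)
    # second determinant: bottom row differentiated, (1)' = 0 and (2t)' = 2
    second = det2(t, t*t, 0.0, 2.0)
    assert abs((first + second) - 2*t) < 1e-12  # dW/dt = (t^2)' = 2t
```

Here the first determinant vanishes identically (its rows are equal), so the second one carries all of dW/dt.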
2b. When x(1) is substituted into Eq.(3) we have

  dx1(1)/dt = p11x1(1) + p12x2(1)
  dx2(1)/dt = p21x1(1) + p22x2(1).

Substituting the first equation above and its counterpart for
x(2) into the first determinant appearing in dW/dt and
evaluating the result yields p11W. Similarly, the second
determinant in dW/dt is evaluated as p22W, yielding the desired
result.
2c. From part b we have dW/W = [p11(t) + p22(t)]dt, which gives

  W(t) = c exp( ∫ [p11(t) + p22(t)]dt ).
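The formula in part c can be spot-checked in Python. The sketch below assumes the constant-coefficient system x' = [ 3  -2 ; 2  -2 ]x of Section 7.5, Problem 1, whose solutions are x(1) = (1, 2)e^-t and x(2) = (2, 1)e^2t; here p11 + p22 = 3 + (-2) = 1, so the formula predicts W(t) = W(0)e^t:

```python
import math

# Wronskian of the assumed solutions x(1) = (1, 2)e^-t, x(2) = (2, 1)e^2t
def W(t):
    return math.exp(-t) * math.exp(2*t) - 2*math.exp(2*t) * 2*math.exp(-t)

# part c predicts W(t) = W(0) * exp(integral from 0 to t of (p11 + p22) ds),
# and here p11 + p22 = 1
for t in (0.0, 0.5, 2.0, -1.0):
    assert abs(W(t) - W(0.0) * math.exp(t)) < 1e-9
```

Evaluating by hand gives W(t) = e^t - 4e^t = -3e^t, which is indeed W(0)e^t with c = W(0) = -3.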
6a. W = det[ t  t^2 ; 1  2t ] = 2t^2 - t^2 = t^2.
6b. Pick t = t0; then c1x(1)(t0) + c2x(2)(t0) = 0 implies

  c1[ t0 ; 1 ] + c2[ t0^2 ; 2t0 ] = [ 0 ; 0 ],

which has a nonzero solution for c1 and c2 if and only if

  det[ t0  t0^2 ; 1  2t0 ] = 2t0^2 - t0^2 = t0^2 = 0.
Thus x(1) (t) and x(2) (t) are linearly independent at each point except t = 0. Thus they are linearly independent on every interval.
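A small Python sketch of parts a and b, using the Wronskian W = t^2 computed above:

```python
# Wronskian from part a: det[[t, t^2], [1, 2t]] = 2t^2 - t^2 = t^2
def wronskian(t):
    return t * (2*t) - (t**2) * 1

assert wronskian(0.0) == 0.0                        # vanishes at t = 0 ...
for t in (-2.0, -0.5, 1.0, 3.0):
    assert wronskian(t) == t**2 and wronskian(t) > 0  # ... and nowhere else
```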
6c. From part a we see that the Wronskian vanishes at t = 0, but
not at any other point. By Theorem 7.4.3, if the coefficients
pij(t) from Eq.(3) are continuous, then the Wronskian is either
identically zero or else never vanishes. Hence we conclude that
the D.E. satisfied by x(1)(t) and x(2)(t) must have at
least one discontinuous coefficient at t = 0.
6d. To obtain the system satisfied by x(1) and x(2), write
x = c1x(1) + c2x(2), or

  [ x1 ; x2 ] = c1[ t ; 1 ] + c2[ t^2 ; 2t ].

Taking the derivative we obtain

  [ x1' ; x2' ] = c1[ 1 ; 0 ] + c2[ 2t ; 2 ].

Solving this last system for c1 and c2 we find c1 = x1' - tx2'
and c2 = x2'/2. Thus

  [ x1 ; x2 ] = (x1' - tx2')[ t ; 1 ] + (x2'/2)[ t^2 ; 2t ],

which yields x1 = tx1' - (t^2/2)x2' and x2 = x1'. Writing this
system in matrix form we have

  x = [ t  -t^2/2 ; 1  0 ] x'.

Finding the inverse of the matrix multiplying x' yields the
desired system, x' = [ 0  1 ; -2/t^2  2/t ] x.
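Part d can be verified in Python. Inverting [ t  -t^2/2 ; 1  0 ] by hand (det = t^2/2) gives the coefficient matrix A(t) = [ 0  1 ; -2/t^2  2/t ], and both proposed solutions should satisfy x' = A(t)x for t ≠ 0:

```python
# A(t) = inverse of [[t, -t^2/2], [1, 0]]; entries computed by hand:
# det = t^2/2, so A(t) = [[0, 1], [-2/t^2, 2/t]]
def A(t):
    return [[0.0, 1.0], [-2.0/t**2, 2.0/t]]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

for t in (0.5, 1.0, -3.0):
    # x(1) = (t, 1) has derivative (1, 0)
    assert all(abs(a - b) < 1e-12
               for a, b in zip(matvec(A(t), (t, 1.0)), (1.0, 0.0)))
    # x(2) = (t^2, 2t) has derivative (2t, 2)
    assert all(abs(a - b) < 1e-12
               for a, b in zip(matvec(A(t), (t*t, 2*t)), (2*t, 2.0)))
```

Note that the entries -2/t^2 and 2/t blow up at t = 0, which is exactly the discontinuous coefficient predicted in part c.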
Section 7.5, Page 381
1. Assuming that there are solutions of the form x = ξe^rt, we
substitute into the D.E. to find rξe^rt = [ 3  -2 ; 2  -2 ]ξe^rt.
Since Iξ = ξ and e^rt ≠ 0, we can write this equation as
( [ 3  -2 ; 2  -2 ] - rI )ξ = 0, and thus we must solve

  [ 3-r  -2 ; 2  -2-r ][ ξ1 ; ξ2 ] = [ 0 ; 0 ]

for r, ξ1 and ξ2. The determinant of the coefficients is
(3-r)(-2-r) + 4 = r^2 - r - 2, so the eigenvalues are r = -1, 2.
The eigenvector corresponding to r = -1 satisfies

  [ 4  -2 ; 2  -1 ][ ξ1 ; ξ2 ] = [ 0 ; 0 ],

which yields 2ξ1 - ξ2 = 0. Thus x(1)(t) = ξ(1)e^-t =
[ 1 ; 2 ]e^-t, where we have set ξ1 = 1. (Any other nonzero
choice would also work.) In a similar fashion, for r = 2 we have

  [ 1  -2 ; 2  -4 ][ ξ1 ; ξ2 ] = [ 0 ; 0 ],

or ξ1 - 2ξ2 = 0. Hence x(2)(t) = ξ(2)e^2t = [ 2 ; 1 ]e^2t by
setting ξ2 = 1. The general solution is then
x = c1x(1)(t) + c2x(2)(t). To sketch the trajectories we
follow the steps illustrated in Examples 1 and 2.
Setting c2 = 0 we have x = [ x1 ; x2 ] = c1[ 1 ; 2 ]e^-t, or
x1 = c1e^-t and x2 = 2c1e^-t, and thus one asymptote is given by
x2 = 2x1. In a similar fashion c1 = 0 gives x2 = (1/2)x1 as a
second asymptote. Since the roots differ in sign, the
trajectories for this problem are similar in nature to those in
Example 1. For c2 ≠ 0, all solutions will be asymptotic to
x2 = (1/2)x1 as t → ∞. For c2 = 0, the solution approaches the
origin along the line x2 = 2x1.
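The eigenvalues and eigenvectors found above can be confirmed with a short Python check (the matrix and eigenpairs are exactly those of this problem; nothing else is assumed):

```python
# A and the eigenpairs found above: r = -1 with xi = (1, 2), r = 2 with xi = (2, 1)
A = [[3.0, -2.0], [2.0, -2.0]]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

def charpoly(r):
    # (3 - r)(-2 - r) + 4 = r^2 - r - 2
    return r*r - r - 2

for r, xi in ((-1.0, (1.0, 2.0)), (2.0, (2.0, 1.0))):
    assert charpoly(r) == 0.0            # r is a root of the characteristic polynomial
    Av = matvec(A, xi)
    assert all(abs(a - r*x) < 1e-12      # A*xi = r*xi
               for a, x in zip(Av, xi))
```

Since r = 2 dominates as t → ∞, trajectories with c2 ≠ 0 line up with the eigenvector (2, 1), i.e. the line x2 = (1/2)x1, matching the sketch described above.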