Boyce W.E. Elementary Differential Equations and Boundary Value Problems - John Wiley & Sons, 2001. - 1310 p.

\frac{d}{dt}\exp(At) = \sum_{n=1}^{\infty} \frac{A^n t^{n-1}}{(n-1)!} = A\left[ I + \sum_{n=1}^{\infty} \frac{A^n t^n}{n!} \right] = A \exp(At).   (24)
Thus exp(At) satisfies the differential equation
(d/dt) exp(At) = A exp(At).   (25)
Further, when t = 0, exp(At) satisfies the initial condition
exp(At)|_{t=0} = I.   (26)
The fundamental matrix Φ satisfies the same initial value problem as exp(At), namely,
Φ' = AΦ,   Φ(0) = I.   (27)
Thus we can identify exp(At) with the fundamental matrix Φ(t), and we can write the solution of the initial value problem (19) in the form
x = exp(At)x0, (28)
which is analogous to the solution (18) of the initial value problem (17).
In order to justify more conclusively the use of exp (At) for the sum of the series (22), we should demonstrate that this matrix function does indeed have the properties we associate with the exponential function. One way to do this is outlined in Problem 15.
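As an illustrative check (not part of the text), the short Python sketch below uses SciPy's matrix exponential to confirm numerically that exp(At) reduces to I at t = 0, that x = exp(At)x0 agrees with a direct numerical integration of x' = Ax, and that the exponential law exp[A(t + s)] = exp(At) exp(As) holds; the matrix A, the vector x0, and the times t and s are arbitrary choices made for the example.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Arbitrary illustrative choices; any constant matrix and initial vector will do.
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
x0 = np.array([2.0, -1.0])
t, s = 0.7, 0.3

# Initial condition (26): exp(A*0) = I
print(np.allclose(expm(0.0 * A), np.eye(2)))                      # True

# Solution (28): x = exp(At) x0 versus direct numerical integration of x' = Ax
x_exp = expm(t * A) @ x0
sol = solve_ivp(lambda tau, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x_exp, sol.y[:, -1], atol=1e-6))                # True

# One exponential-function property (cf. Problem 15): exp(A(t+s)) = exp(At) exp(As)
print(np.allclose(expm((t + s) * A), expm(t * A) @ expm(s * A)))  # True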
Diagonalizable Matrices. The basic reason why a system of linear (algebraic or differential) equations presents some difficulty is that the equations are usually coupled. In other words, some or all of the equations involve more than one, typically all, of the unknown variables. Hence the equations in the system must be solved simultaneously. In contrast, if each equation involves only a single variable, then each equation can be solved independently of all the others, which is a much easier task. This observation suggests that one way to solve a system of equations might be by transforming it into an equivalent uncoupled system in which each equation contains only one unknown variable. This corresponds to transforming the coefficient matrix A into a diagonal matrix.
Eigenvectors are useful in accomplishing such a transformation. Suppose that the n × n matrix A has a full set of n linearly independent eigenvectors. Recall that this will certainly be the case if the eigenvalues of A are all different, or if A is Hermitian. Letting ξ^(1), ..., ξ^(n) denote these eigenvectors and λ_1, ..., λ_n the corresponding eigenvalues, form the matrix T whose columns are the eigenvectors, that is,

T = \begin{pmatrix} \xi^{(1)} & \xi^{(2)} & \cdots & \xi^{(n)} \end{pmatrix}.   (29)
Since the columns of T are linearly independent vectors, det T ≠ 0; hence T is nonsingular and T^{-1} exists. A straightforward calculation shows that the columns of the matrix AT are just the vectors Aξ^(1), ..., Aξ^(n). Since Aξ^(k) = λ_k ξ^(k), it follows that
AT = TD,   (30)
where
D
0
0
\0 0
0
0
/
(31)
is a diagonal matrix whose diagonal elements are the eigenvalues of A. From Eq. (30) it follows that
T^{-1}AT = D.   (32)
Thus, if the eigenvalues and eigenvectors of A are known, A can be transformed into a diagonal matrix by the process shown in Eq. (32). This process is known as a similarity
transformation, and Eq. (32) is summed up in words by saying that A is similar to the diagonal matrix D. Alternatively, we may say that A is diagonalizable. Observe that a similarity transformation leaves the eigenvalues of A unchanged and transforms its eigenvectors into the coordinate vectors e(1),..., e(n).
If A is Hermitian, then the determination of T^{-1} is very simple. We choose the eigenvectors ξ^(1), ..., ξ^(n) of A so that they are normalized by (ξ^(i), ξ^(i)) = 1 for each i, as well as orthogonal. Then it is easy to verify that T^{-1} = T*; in other words, the inverse of T is the same as its adjoint (the transpose of its complex conjugate).
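For the Hermitian case just described, the following sketch (supplementary to the text, with an arbitrarily chosen Hermitian matrix) uses NumPy's eigh routine, which returns orthonormal eigenvectors; the matrix T built from them satisfies T* T = I, so T^{-1} = T*.

import numpy as np

# An arbitrary 2 x 2 Hermitian matrix (illustrative choice, not from the text)
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
print(np.allclose(A, A.conj().T))                        # True: A is Hermitian

lam, T = np.linalg.eigh(A)      # columns of T are orthonormal eigenvectors of A
print(np.allclose(T.conj().T @ T, np.eye(2)))            # True: T* T = I, hence T^{-1} = T*
print(np.allclose(T.conj().T @ A @ T, np.diag(lam)))     # True: T* A T = D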
Finally, we note that if A has fewer than n linearly independent eigenvectors, then there is no matrix T such that T^{-1}AT = D. In this case, A is not similar to a diagonal matrix and is not diagonalizable.
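The construction of Eqs. (29)-(32), and its failure for a defective matrix, can be illustrated with a brief NumPy sketch; both matrices below are arbitrary illustrative choices (the first has the distinct eigenvalues -1 and -2, while the second has a repeated eigenvalue with only one independent eigenvector).

import numpy as np

A = np.array([[1.0, -2.0],
              [3.0, -4.0]])
lam, T = np.linalg.eig(A)                          # columns of T are eigenvectors of A
D = np.diag(lam)
print(np.allclose(np.linalg.inv(T) @ A @ T, D))    # True: A is diagonalizable

# A defective matrix: the repeated eigenvalue 1 has only one independent
# eigenvector, so the returned "eigenvector matrix" is numerically singular.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
_, S = np.linalg.eig(B)
print(abs(np.linalg.det(S)))                       # essentially 0: no invertible T exists for B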
EXAMPLE 3
Consider the matrix
A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}.   (33)
Find the similarity transformation matrix T and show that A can be diagonalized.
In Example 1 of Section 7.5 we found that the eigenvalues and eigenvectors of A are
r_1 = 3,  \xi^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix};    r_2 = -1,  \xi^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.   (34)
Thus the transformation matrix T and its inverse T^{-1} are
T = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix},    T^{-1} = \begin{pmatrix} 1/2 & 1/4 \\ 1/2 & -1/4 \end{pmatrix}.   (35)
Consequently, you can check that
T^{-1}AT = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix} = D.   (36)
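A quick numerical check of Eqs. (35) and (36), not part of the text, with the matrices entered directly:

import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
T = np.array([[1.0, 1.0],
              [2.0, -2.0]])
print(np.linalg.inv(T))           # [[0.5, 0.25], [0.5, -0.25]], as in Eq. (35)
print(np.linalg.inv(T) @ A @ T)   # diag(3, -1), up to roundoff, as in Eq. (36)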
Now let us turn again to the system
x' = Ax, (37)
where A is a constant matrix. In Sections 7.5 and 7.6 we described how to solve such a system by starting from the assumption that x = ξe^{rt}. Now we provide another viewpoint, one based on diagonalizing the coefficient matrix A.
According to the results stated just above, it is possible to diagonalize A whenever A has a full set of n linearly independent eigenvectors. Let ξ^(1), ..., ξ^(n) be the eigenvectors of A corresponding to the eigenvalues r_1, ..., r_n, and form the transformation matrix T whose columns are ξ^(1), ..., ξ^(n).
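As an illustration of this viewpoint, here is a hedged sketch (not the book's derivation; the matrix is that of Eq. (33), and the initial vector and time are arbitrary choices): setting x = Ty turns x' = Ax into the uncoupled system y' = Dy, whose components y_k = c_k e^{r_k t} are solved separately, and transforming back reproduces the solution x = exp(At)x0 of Eq. (28).

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],                  # the matrix of Eq. (33)
              [4.0, 1.0]])
x0 = np.array([2.0, -1.0])                 # an arbitrary initial vector
t = 0.5

r, T = np.linalg.eig(A)                    # eigenvalues r_k; columns of T are eigenvectors
c = np.linalg.solve(T, x0)                 # y(0) = T^{-1} x0
y = c * np.exp(r * t)                      # each uncoupled equation y_k' = r_k y_k solved separately
x_diag = T @ y                             # transform back: x = T y

print(np.allclose(x_diag, expm(t * A) @ x0))   # True: agrees with x = exp(At) x0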