d/dt exp(At) = Σ_{n=1}^∞ A^n t^{n-1}/(n-1)! = A [ I + Σ_{n=1}^∞ A^n t^n/n! ] = A exp(At). (24)
Thus exp(At) satisfies the differential equation
d/dt exp(At) = A exp(At). (25)
Further, when t = 0, exp(At) satisfies the initial condition

exp(At)|_{t=0} = I. (26)

The fundamental matrix Φ satisfies the same initial value problem as exp(At), namely,

Φ' = AΦ, Φ(0) = I. (27)
7.7 Fundamental Matrices
Thus we can identify exp(At) with the fundamental matrix Φ(t), and we can write the solution of the initial value problem (19) in the form
x = exp(At)x0, (28)
which is analogous to the solution (18) of the initial value problem (17).
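As a numerical sketch of the solution formula (28) — using a hypothetical 2 × 2 matrix and initial vector, and `scipy`'s built-in matrix exponential — one can check by a finite difference that x(t) = exp(At)x0 indeed satisfies x' = Ax and the initial condition:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical test data: a constant matrix A and initial vector x0
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
x0 = np.array([1.0, 0.0])

def x(t):
    # Solution of x' = Ax, x(0) = x0, via the matrix exponential, Eq. (28)
    return expm(A * t) @ x0

# Verify the ODE at t = 0.5 with a central-difference derivative
t, h = 0.5, 1e-6
lhs = (x(t + h) - x(t - h)) / (2 * h)   # numerical approximation of x'(t)
rhs = A @ x(t)
assert np.allclose(lhs, rhs, atol=1e-4)

# Verify the initial condition x(0) = x0
assert np.allclose(x(0.0), x0)
```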
In order to justify more conclusively the use of exp (At) for the sum of the series (22), we should demonstrate that this matrix function does indeed have the properties we associate with the exponential function. One way to do this is outlined in Problem 15.
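Two of these properties — the law of exponents exp(A(t + s)) = exp(At) exp(As) for a fixed matrix A, and the initial condition (26) — can be illustrated numerically by summing the series (22) directly (the test matrix below is a hypothetical example):

```python
import numpy as np

def expm_series(A, t, n_terms=40):
    """Partial sum of the series (22): I + sum over n >= 1 of A^n t^n / n!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ (A * t) / k      # accumulates A^k t^k / k!
        result = result + term
    return result

# Hypothetical test matrix
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Law of exponents for a single matrix A: exp(A(t+s)) = exp(At) exp(As)
t, s = 0.7, 0.4
assert np.allclose(expm_series(A, t + s),
                   expm_series(A, t) @ expm_series(A, s))

# At t = 0 the series reduces to the identity, Eq. (26)
assert np.allclose(expm_series(A, 0.0), np.eye(2))
```

Note that the law of exponents holds here because both exponents involve the same matrix A; for two different matrices A and B, exp(A + B) = exp(A) exp(B) only when A and B commute.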
Diagonalizable Matrices. The basic reason why a system of linear (algebraic or differential) equations presents some difficulty is that the equations are usually coupled. In other words, some or all of the equations involve more than one, typically all, of the unknown variables. Hence the equations in the system must be solved simultaneously. In contrast, if each equation involves only a single variable, then each equation can be solved independently of all the others, which is a much easier task. This observation suggests that one way to solve a system of equations might be by transforming it into an equivalent uncoupled system in which each equation contains only one unknown variable. This corresponds to transforming the coefficient matrix A into a diagonal matrix.
Eigenvectors are useful in accomplishing such a transformation. Suppose that the n x n matrix A has a full set of n linearly independent eigenvectors. Recall that this will certainly be the case if the eigenvalues of A are all different, or if A is Hermitian. Letting ξ^(1), ..., ξ^(n) denote these eigenvectors and λ1, ..., λn the corresponding eigenvalues, form the matrix T whose columns are the eigenvectors, that is,

T = ( ξ^(1)  ···  ξ^(n) ). (29)
Since the columns of T are linearly independent vectors, det T ≠ 0; hence T is nonsingular and T^(-1) exists. A straightforward calculation shows that the columns of the matrix AT are just the vectors Aξ^(1), ..., Aξ^(n). Since Aξ^(k) = λk ξ^(k), it follows that

AT = ( λ1 ξ^(1)  ···  λn ξ^(n) ) = TD, (30)

where

D = diag(λ1, ..., λn) (31)

is a diagonal matrix whose diagonal elements are the eigenvalues of A. From Eq. (30) it follows that

T^(-1)AT = D. (32)
Thus, if the eigenvalues and eigenvectors of A are known, A can be transformed into a diagonal matrix by the process shown in Eq. (32). This process is known as a similarity
transformation, and Eq. (32) is summed up in words by saying that A is similar to the diagonal matrix D. Alternatively, we may say that A is diagonalizable. Observe that a similarity transformation leaves the eigenvalues of A unchanged and transforms its eigenvectors into the coordinate vectors e(1),..., e(n).
If A is Hermitian, then the determination of T^(-1) is very simple. We choose the eigenvectors ξ^(1), ..., ξ^(n) of A so that they are normalized by (ξ^(i), ξ^(i)) = 1 for each i, as well as mutually orthogonal. Then it is easy to verify that T^(-1) = T*; in other words, the inverse of T is the same as its adjoint (the transpose of its complex conjugate).
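For a Hermitian matrix, numpy's `eigh` routine returns orthonormal eigenvectors, so this fact can be checked directly (the matrix below is a hypothetical example):

```python
import numpy as np

# A hypothetical Hermitian test matrix (equal to its conjugate transpose)
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# eigh returns real eigenvalues and orthonormal eigenvectors (columns of T)
eigvals, T = np.linalg.eigh(A)

# The inverse of T equals its adjoint (conjugate transpose): T^(-1) = T*
assert np.allclose(np.linalg.inv(T), T.conj().T)

# Consequently T* A T is the diagonal matrix of eigenvalues
assert np.allclose(T.conj().T @ A @ T, np.diag(eigvals))
```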
Finally, we note that if A has fewer than n linearly independent eigenvectors, then there is no matrix T such that T-1AT = D. In this case, A is not similar to a diagonal matrix, and is not diagonalizable.
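A defective matrix of this kind can be detected numerically by checking the rank of its eigenvector matrix. The classic 2 × 2 Jordan block below (a hypothetical example) has a repeated eigenvalue but only one independent eigenvector:

```python
import numpy as np

# A has eigenvalue 1 with algebraic multiplicity 2,
# but only one linearly independent eigenvector
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals, T = np.linalg.eig(A)

# Both columns of T are (numerically) the same eigenvector,
# so T is singular and no transformation T^(-1) A T = D exists
assert np.linalg.matrix_rank(T) == 1
```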
Consider the matrix

A = | 1  1 |
    | 4  1 |.    (33)
Find the similarity transformation matrix T and show that A can be diagonalized.
In Example 1 of Section 7.5 we found that the eigenvalues and eigenvectors of A are

r1 = 3,  ξ^(1) = (1, 2)^T;    r2 = -1,  ξ^(2) = (1, -2)^T. (34)
Thus the transformation matrix T and its inverse T^(-1) are

T = | 1   1 |,    T^(-1) = | 1/2   1/4 |
    | 2  -2 |              | 1/2  -1/4 |.    (35)
Consequently, you can check that

T^(-1)AT = | 3   0 |
           | 0  -1 | = D.    (36)
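The diagonalization in this example is easy to confirm numerically with numpy:

```python
import numpy as np

# The matrix A from the example
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

# Transformation matrix whose columns are the eigenvectors (1, 2) and (1, -2)
T = np.array([[1.0, 1.0],
              [2.0, -2.0]])
T_inv = np.linalg.inv(T)

# T^(-1) A T is the diagonal matrix of eigenvalues 3 and -1
D = T_inv @ A @ T
assert np.allclose(D, np.diag([3.0, -1.0]))
```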
Now let us turn again to the system
x' = Ax, (37)
where A is a constant matrix. In Sections 7.5 and 7.6 we have described how to solve such a system by starting from the assumption that x = ξe^(rt). Now we provide another viewpoint, one based on diagonalizing the coefficient matrix A.
According to the results stated just above, it is possible to diagonalize A whenever A has a full set of n linearly independent eigenvectors. Let ξ^(1), ..., ξ^(n) be eigenvectors of A corresponding to the eigenvalues r1, ..., rn and form the transformation matrix T