Usually, we will drop the arbitrary constant c when finding eigenvectors; thus instead of Eq. (34) we write
$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \tag{35}$$
and remember that any nonzero multiple of this vector is also an eigenvector. We say that x(1) is the eigenvector corresponding to the eigenvalue λ1 = 2.
Now setting λ = -1 in Eq. (31), we obtain
$$\begin{pmatrix} 4 & -1 \\ 4 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{36}$$
Again we obtain a single condition on x1 and x2, namely, 4x1 - x2 = 0. Thus the eigenvector corresponding to the eigenvalue λ2 = -1 is
$$\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}, \tag{37}$$
or any nonzero multiple of this vector.
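The matrix of Example 4 is not repeated in this passage, but the conditions derived here (x1 = x2 for λ1 = 2, and 4x1 - x2 = 0 for λ2 = -1) are consistent with taking A = [[3, -1], [4, -2]]. Assuming that matrix, a quick numerical check of both eigenpairs might look like:

```python
import numpy as np

# Matrix assumed from Example 4 earlier in the section; chosen to be
# consistent with the eigenvector conditions derived in the text.
A = np.array([[3.0, -1.0],
              [4.0, -2.0]])

x1 = np.array([1.0, 1.0])   # eigenvector for lambda_1 = 2, Eq. (35)
x2 = np.array([1.0, 4.0])   # eigenvector for lambda_2 = -1, Eq. (37)

print(A @ x1)  # equals 2 * x1
print(A @ x2)  # equals -1 * x2
```

Any nonzero multiple of x1 or x2 passes the same check, reflecting that eigenvectors are determined only up to a constant.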
As Example 4 illustrates, eigenvectors are determined only up to an arbitrary nonzero multiplicative constant; if this constant is specified in some way, then the eigenvectors are said to be normalized. In Example 4, we set the constant equal to 1, but any other nonzero value could also have been used. Sometimes it is convenient to normalize an eigenvector x by choosing the constant so that (x, x) = 1.
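The normalization (x, x) = 1 amounts to dividing an eigenvector by its length. A minimal numerical sketch, using the eigenvector (1, 4)^T of Example 4:

```python
import numpy as np

x = np.array([1.0, 4.0])      # an eigenvector, defined only up to a constant
x_hat = x / np.sqrt(x @ x)    # choose the constant so that (x, x) = 1

print(x_hat @ x_hat)          # 1.0, up to rounding
```

Any other nonzero scaling would serve equally well as an eigenvector; normalization merely fixes a convenient representative.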
Equation (27) is a polynomial equation of degree n in λ, so there are n eigenvalues λ1, ..., λn, some of which may be repeated. If a given eigenvalue appears m times as a root of Eq. (27), then that eigenvalue is said to have multiplicity m. Each eigenvalue has at least one associated eigenvector, and an eigenvalue of multiplicity m may have q linearly independent eigenvectors, where
$$1 \le q \le m. \tag{38}$$
Examples show that q may be any integer in this interval. If all the eigenvalues of a matrix A are simple (have multiplicity one), then it is possible to show that the n eigenvectors of A, one for each eigenvalue, are linearly independent. On the other hand, if A has one or more repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors associated with A, since for a repeated eigenvalue we may have q < m. As we will see in Section 7.8, this fact may lead to complications later on in the solution of systems of differential equations.
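For a given eigenvalue λ, the number q of linearly independent eigenvectors equals n minus the rank of A - λI. A sketch using a standard matrix (not from the text) whose double eigenvalue has only one independent eigenvector, so q < m:

```python
import numpy as np

# lambda = 1 is a double eigenvalue (m = 2) of this matrix, but
# q = n - rank(A - lambda*I) = 2 - 1 = 1: only one independent eigenvector.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
n = A.shape[0]

q = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(q)  # 1, even though the eigenvalue has multiplicity 2
```

This is exactly the deficiency (q < m) that complicates the solution of linear systems in Section 7.8.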
Find the eigenvalues and eigenvectors of the matrix
$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}. \tag{39}$$
The eigenvalues λ and eigenvectors x satisfy the equation (A - λI)x = 0, or
$$\begin{pmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{40}$$
The eigenvalues are the roots of the equation
$$\det(\mathbf{A} - \lambda\mathbf{I}) = \begin{vmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{vmatrix} = -\lambda^3 + 3\lambda + 2 = 0. \tag{41}$$
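The roots of the characteristic polynomial -λ³ + 3λ + 2 can be checked numerically, for instance with numpy.roots (a sketch, not part of the text):

```python
import numpy as np

# Coefficients of -lambda^3 + 0*lambda^2 + 3*lambda + 2, highest power first
coeffs = [-1.0, 0.0, 3.0, 2.0]
roots = np.sort(np.roots(coeffs).real)

print(roots)  # approximately [-1, -1, 2]
```

The double root -1 appears twice in the output, matching its multiplicity.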
The roots of Eq. (41) are λ1 = 2, λ2 = -1, and λ3 = -1. Thus 2 is a simple eigenvalue, and -1 is an eigenvalue of multiplicity 2.
To find the eigenvector x(1) corresponding to the eigenvalue λ1 we substitute λ = 2 in Eq. (40); this gives the system
$$\begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{42}$$
We can reduce this to the equivalent system
$$\begin{pmatrix} 2 & -1 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \tag{43}$$
by elementary row operations. Solving this system we obtain the eigenvector
$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \tag{44}$$
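The eigenvector x(1) = (1, 1, 1)^T found above can be verified directly against the matrix A of this example (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
x1 = np.array([1.0, 1.0, 1.0])  # eigenvector found by row reduction above

print(A @ x1)  # equals 2 * x1, confirming the eigenvalue lambda_1 = 2
```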
For λ = -1, Eqs. (40) reduce immediately to the single equation
$$x_1 + x_2 + x_3 = 0. \tag{45}$$
Thus values for two of the quantities x1, x2, x3 can be chosen arbitrarily and the third is determined from Eq. (45). For example, if x1 = 1 and x2 = 0, then x3 = -1, and
$$\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \tag{46}$$
is an eigenvector. Any nonzero multiple of x(2) is also an eigenvector, but a second independent eigenvector can be found by making another choice of x1 and x2; for instance, x1 = 0 and x2 = 1. Again x3 = -1 and
$$\mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \tag{47}$$
is an eigenvector linearly independent of x(2). Therefore in this example two linearly independent eigenvectors are associated with the double eigenvalue.
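Here the double eigenvalue λ = -1 has q = m = 2: both vectors found above satisfy Ax = -x, and they are linearly independent, which can be checked numerically (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
x2 = np.array([1.0, 0.0, -1.0])  # first choice of free variables
x3 = np.array([0.0, 1.0, -1.0])  # second, independent choice

print(A @ x2)  # equals -x2
print(A @ x3)  # equals -x3

# Rank 2 means the two eigenvectors are linearly independent.
print(np.linalg.matrix_rank(np.column_stack([x2, x3])))  # 2
```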
An important special class of matrices, called self-adjoint or Hermitian matrices, consists of those for which A* = A; that is, $\bar{a}_{ji} = a_{ij}$. Hermitian matrices include as a subclass real symmetric matrices, that is, matrices that have real elements and for which A^T = A. The eigenvalues and eigenvectors of Hermitian matrices always have the following useful properties:
1. All eigenvalues are real.
2. There always exists a full set of n linearly independent eigenvectors, regardless of the multiplicities of the eigenvalues.
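Both properties can be illustrated with numpy.linalg.eigh, which is intended for Hermitian matrices; the example matrix below is an arbitrary choice for illustration:

```python
import numpy as np

# A Hermitian matrix: it equals its own conjugate transpose (A* = A).
A = np.array([[2.0 + 0.0j, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0 + 0.0j]])

# eigh returns real eigenvalues (in ascending order) and an orthonormal,
# hence linearly independent, set of n eigenvectors in the columns of V.
w, V = np.linalg.eigh(A)

print(w)                          # real eigenvalues: 1 and 4
print(np.linalg.matrix_rank(V))   # 2: a full set of independent eigenvectors
```

Even if this matrix had a repeated eigenvalue, the Hermitian property would still guarantee n independent eigenvectors, unlike the general case where q < m is possible.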