2x^(1) − 3x^(2) − x^(3) = 0.
Frequently, it is useful to think of the columns (or rows) of a matrix A as vectors. These column (or row) vectors are linearly independent if and only if det A ≠ 0. Further, if C = AB, then it can be shown that det C = (det A)(det B). Therefore, if the columns (or rows) of both A and B are linearly independent, then det C = (det A)(det B) ≠ 0, so the columns (or rows) of C are also linearly independent.
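These determinant facts can be checked numerically; the following is a minimal NumPy sketch, where the two 3 × 3 matrices are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Two illustrative 3x3 matrices (arbitrary choices, not from the text).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])

# The columns of A are linearly independent exactly when det A != 0.
assert abs(np.linalg.det(A)) > 1e-12

# det C = (det A)(det B) for C = AB, so det C != 0 as well.
C = A @ B
assert np.isclose(np.linalg.det(C), np.linalg.det(A) * np.linalg.det(B))
assert abs(np.linalg.det(C)) > 1e-12
```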
Now let us extend the concepts of linear dependence and independence to a set of vector functions x^(1)(t), ..., x^(k)(t) defined on an interval α < t < β. The vectors x^(1)(t), ..., x^(k)(t) are said to be linearly dependent on α < t < β if there exists a set of constants c1, ..., ck, not all of which are zero, such that c1x^(1)(t) + ··· + ckx^(k)(t) = 0 for all t in the interval. Otherwise, x^(1)(t), ..., x^(k)(t) are said to be linearly independent. Note that if x^(1)(t), ..., x^(k)(t) are linearly dependent on an interval, they are linearly dependent at each point in the interval. However, if x^(1)(t), ..., x^(k)(t) are linearly independent on an interval, they may or may not be linearly independent at each point; they may, in fact, be linearly dependent at each point, but with different sets of constants at different points. See Problem 14 for an example.
Eigenvalues and Eigenvectors. The equation
Ax = y (24)
can be viewed as a linear transformation that maps (or transforms) a given vector x
into a new vector y. Vectors that are transformed into multiples of themselves are
important in many applications.⁴ To find such vectors we set y = λx, where λ is a scalar proportionality factor, and seek solutions of the equations

Ax = λx, (25)

or

(A − λI)x = 0. (26)
4For example, this problem is encountered in finding the principal axes of stress or strain in an elastic body, and in finding the modes of free vibration in a conservative system with a finite number of degrees of freedom.
7.3 Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors 363
The latter equation has nonzero solutions if and only if λ is chosen so that

det(A − λI) = 0. (27)

Values of λ that satisfy Eq. (27) are called eigenvalues of the matrix A, and the nonzero solutions of Eq. (25) or (26) that are obtained by using such a value of λ are called the eigenvectors corresponding to that eigenvalue.

If A is a 2 × 2 matrix, then Eq. (26) has the form

$$\begin{pmatrix} a_{11}-\lambda & a_{12} \\ a_{21} & a_{22}-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$ (28)

and Eq. (27) becomes

(a11 − λ)(a22 − λ) − a12a21 = 0. (29)
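Expanding Eq. (29) gives λ² − (a11 + a22)λ + (a11a22 − a12a21) = 0, that is, λ² − tr(A)λ + det A = 0. This can be verified numerically; a sketch in NumPy, using the matrix of Example 4 below:

```python
import numpy as np

# The 2x2 matrix of Example 4; any 2x2 matrix works the same way.
A = np.array([[3.0, -1.0],
              [4.0, -2.0]])

# Expanding Eq. (29): (a11 - lam)(a22 - lam) - a12*a21
#   = lam**2 - (a11 + a22)*lam + (a11*a22 - a12*a21)
#   = lam**2 - tr(A)*lam + det(A).
tr, det = np.trace(A), np.linalg.det(A)

# The eigenvalues computed by numpy are exactly the roots of this quadratic.
for lam in np.linalg.eigvals(A):
    assert np.isclose(lam**2 - tr * lam + det, 0.0)
```

Here tr(A) = 1 and det A = −2, which reproduces Eq. (32), λ² − λ − 2 = 0.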
The following example illustrates how eigenvalues and eigenvectors are found.
EXAMPLE 4

Find the eigenvalues and eigenvectors of the matrix
$$A = \begin{pmatrix} 3 & -1 \\ 4 & -2 \end{pmatrix}.$$ (30)
The eigenvalues λ and eigenvectors x satisfy the equation (A − λI)x = 0, or

$$\begin{pmatrix} 3-\lambda & -1 \\ 4 & -2-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$ (31)
The eigenvalues are the roots of the equation

$$\det(A - \lambda I) = \begin{vmatrix} 3-\lambda & -1 \\ 4 & -2-\lambda \end{vmatrix} = \lambda^2 - \lambda - 2 = 0.$$ (32)

Thus the eigenvalues are λ1 = 2 and λ2 = −1.
To find the eigenvectors, we return to Eq. (31) and replace λ by each of the eigenvalues in turn. For λ = 2 we have

$$\begin{pmatrix} 1 & -1 \\ 4 & -4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$ (33)

Hence each row of this vector equation leads to the condition x1 − x2 = 0, so x1 and x2 are equal, but their value is not determined. If x1 = c, then x2 = c also, and the eigenvector x^(1) is

$$x^{(1)} = c \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad c \neq 0.$$ (34)
Usually, we will drop the arbitrary constant c when finding eigenvectors; thus instead of Eq. (34) we write
$$x^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$ (35)
and remember that any nonzero multiple of this vector is also an eigenvector. We say that x^(1) is the eigenvector corresponding to the eigenvalue λ1 = 2.
Now setting λ = −1 in Eq. (31), we obtain

$$\begin{pmatrix} 4 & -1 \\ 4 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$ (36)
Again we obtain a single condition on x1 and x2, namely, 4x1 − x2 = 0. Thus the eigenvector corresponding to the eigenvalue λ2 = −1 is
$$x^{(2)} = \begin{pmatrix} 1 \\ 4 \end{pmatrix},$$ (37)
or any nonzero multiple of this vector.
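As a numerical cross-check of Example 4 (a sketch, not part of the text), NumPy's `eig` reproduces these eigenvalues and, up to normalization, these eigenvectors:

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [4.0, -2.0]])

# numpy returns the eigenvalues and, as columns of the second array,
# eigenvectors normalized so that (x, x) = 1.
eigvals, eigvecs = np.linalg.eig(A)

# Each column v satisfies A v = lam v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Rescaling each eigenvector by its first component recovers the
# representatives chosen above: (1, 1) for lam = 2 and (1, 4) for lam = -1.
for lam, v in zip(eigvals, eigvecs.T):
    v = v / v[0]
    if np.isclose(lam, 2.0):
        assert np.allclose(v, [1.0, 1.0])
    else:
        assert np.allclose(v, [1.0, 4.0])
```

The rescaling step reflects the point made below: eigenvectors are determined only up to a nonzero multiplicative constant, so any normalization convention is equally valid.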
As Example 4 illustrates, eigenvectors are determined only up to an arbitrary nonzero multiplicative constant; if this constant is specified in some way, then the eigenvectors are said to be normalized. In Example 4, we set the constant equal to 1, but any other nonzero value could also have been used. Sometimes it is convenient to normalize an eigenvector x by choosing the constant so that (x, x) = 1.
Equation (27) is a polynomial equation of degree n in λ, so there are n eigenvalues λ1, ..., λn, some of which may be repeated. If a given eigenvalue appears m times as a root of Eq. (27), then that eigenvalue is said to have multiplicity m. Each eigenvalue has at least one associated eigenvector, and an eigenvalue of multiplicity m may have q linearly independent eigenvectors, where
1 ≤ q ≤ m. (38)
Examples show that q may be any integer in this interval. If all the eigenvalues of a matrix A are simple (have multiplicity one), then it is possible to show that the n eigenvectors of A, one for each eigenvalue, are linearly independent. On the other hand, if A has one or more repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors associated with A, since for a repeated eigenvalue we may have q < m. As we will see in Section 7.8, this fact may lead to complications later on in the solution of systems of differential equations.
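As a sketch of the case q < m, the following illustrative matrix (not from the text) has the single eigenvalue 1 with multiplicity m = 2 but only q = 1 linearly independent eigenvector; q is the dimension of the null space of A − λI:

```python
import numpy as np

# Illustrative matrix with eigenvalue 1 repeated (multiplicity m = 2)
# but only q = 1 independent eigenvector.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals = np.linalg.eigvals(A)
assert np.allclose(eigvals, [1.0, 1.0])   # lam = 1 with multiplicity 2

# The eigenvectors for lam = 1 form the null space of A - I; its dimension
# is q = n - rank(A - I) = 2 - 1 = 1, so q < m here.
q = 2 - np.linalg.matrix_rank(A - np.eye(2))
assert q == 1
```

This is exactly the situation that causes the complications in Section 7.8: a repeated eigenvalue may not supply enough independent eigenvectors to build n independent solutions.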