and solved by means of elementary row operations starting from the augmented matrix
 1   2   -4 | 0
 2   1    1 | 0
-1   3  -11 | 0
We proceed as in Examples 1 and 2.
(a) Add (-2) times the first row to the second row, and add the first row to the third row.
 1   2   -4 | 0
 0  -3    9 | 0
 0   5  -15 | 0
(b) Divide the second row by -3; then add (-5) times the second row to the third row.
 1   2   -4 | 0
 0   1   -3 | 0
 0   0    0 | 0
Thus we obtain the equivalent system
c1 + 2c2 - 4c3 = 0,
c2 - 3c3 = 0.    (23)
From the second of Eqs. (23) we have c2 = 3c3, and from the first we obtain c1 = 4c3 - 2c2 = 4c3 - 6c3 = -2c3. Thus we have solved for c1 and c2 in terms of c3, with the latter remaining arbitrary. If we choose c3 = -1 for convenience, then c1 = 2 and c2 = -3. In this case the desired relation (20) becomes

2x(1) - 3x(2) - x(3) = 0.
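The relation found by elimination is easy to cross-check numerically. A minimal NumPy sketch, assuming the column vectors x(1) = (1, 2, -1), x(2) = (2, 1, 3), x(3) = (-4, 1, -11) read off from the columns of the augmented matrix:

```python
import numpy as np

# Column vectors assumed from the augmented matrix above
# (a numerical spot-check, not part of the text's hand computation).
x1 = np.array([1, 2, -1])
x2 = np.array([2, 1, 3])
x3 = np.array([-4, 1, -11])

# The relation found by elimination: 2 x(1) - 3 x(2) - x(3) = 0
residual = 2 * x1 - 3 * x2 - x3
print(residual)  # -> [0 0 0]
```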
Frequently, it is useful to think of the columns (or rows) of a matrix A as vectors. These column (or row) vectors are linearly independent if and only if det A ≠ 0. Further, if C = AB, then it can be shown that det C = (det A)(det B). Therefore, if the columns (or rows) of both A and B are linearly independent, then the columns (or rows) of C are also linearly independent.
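The determinant product rule can be spot-checked numerically; a small sketch with matrices chosen purely for illustration:

```python
import numpy as np

# Arbitrary 2 x 2 matrices (hypothetical, chosen only for illustration)
A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[2.0, 0.0], [1.0, 4.0]])
C = A @ B

# det C should equal (det A)(det B); here det A = -1, det B = 8
print(np.linalg.det(A) * np.linalg.det(B))  # approximately -8
print(np.linalg.det(C))                     # approximately -8
```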
Now let us extend the concepts of linear dependence and independence to a set of vector functions x(1)(t), ..., x(k)(t) defined on an interval α < t < β. The vectors x(1)(t), ..., x(k)(t) are said to be linearly dependent on α < t < β if there exists a set of constants c1, ..., ck, not all of which are zero, such that c1x(1)(t) + ··· + ckx(k)(t) = 0 for all t in the interval. Otherwise, x(1)(t), ..., x(k)(t) are said to be linearly independent. Note that if x(1)(t), ..., x(k)(t) are linearly dependent on an interval, they are linearly dependent at each point in the interval. However, if x(1)(t), ..., x(k)(t) are linearly independent on an interval, they may or may not be linearly independent at each point; they may, in fact, be linearly dependent at each point, but with different sets of constants at different points. See Problem 14 for an example.
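The distinction in the last sentences can be made concrete. The pair of vector functions below is an illustrative choice (assumed here, not necessarily the one in Problem 14): at each fixed t the two vectors are proportional, yet no single pair of nonzero constants works on a whole interval.

```python
import numpy as np

# Illustrative pair (assumed, not from the text): x1(t) = (t, 1), x2(t) = (t^2, t)
def x1(t): return np.array([t, 1.0])
def x2(t): return np.array([t**2, t])

# At each point t, x2(t) = t * x1(t): linearly dependent pointwise,
# but the multiplier t differs from point to point.
for t in (1.0, 2.0, 3.0):
    assert np.allclose(x2(t), t * x1(t))

# Requiring c1*x1(t) + c2*x2(t) = 0 at both t = 1 and t = 2 gives a
# 4 x 2 homogeneous system; full column rank forces c1 = c2 = 0, so the
# functions are linearly independent on any interval containing 1 and 2.
M = np.column_stack([np.concatenate([x1(1.0), x1(2.0)]),
                     np.concatenate([x2(1.0), x2(2.0)])])
print(np.linalg.matrix_rank(M))  # -> 2
```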
Eigenvalues and Eigenvectors. The equation
Ax = y (24)
can be viewed as a linear transformation that maps (or transforms) a given vector x
into a new vector y. Vectors that are transformed into multiples of themselves are
important in many applications.4 To find such vectors we set y = λx, where λ is a scalar proportionality factor, and seek solutions of the equations

Ax = λx, (25)

or, equivalently,

(A - λI)x = 0. (26)
4For example, this problem is encountered in finding the principal axes of stress or strain in an elastic body, and in finding the modes of free vibration in a conservative system with a finite number of degrees of freedom.
The latter equation has nonzero solutions if and only if λ is chosen so that

det(A - λI) = 0. (27)

Values of λ that satisfy Eq. (27) are called eigenvalues of the matrix A, and the nonzero solutions of Eq. (25) or (26) that are obtained by using such a value of λ are called the eigenvectors corresponding to that eigenvalue.
If A is a 2 x 2 matrix, then Eq. (26) has the form

( a11 - λ      a12    ) ( x1 )   ( 0 )
(   a21     a22 - λ   ) ( x2 ) = ( 0 ),    (28)

and Eq. (27) becomes

(a11 - λ)(a22 - λ) - a12 a21 = 0. (29)
The following example illustrates how eigenvalues and eigenvectors are found.
Find the eigenvalues and eigenvectors of the matrix
A = ( 3  -1 )
    ( 4  -2 ).    (30)
The eigenvalues λ and eigenvectors x satisfy the equation (A - λI)x = 0, or

( 3 - λ     -1    ) ( x1 )   ( 0 )
(   4     -2 - λ  ) ( x2 ) = ( 0 ).    (31)
The eigenvalues are the roots of the equation
det(A - λI) = (3 - λ)(-2 - λ) - (-1)(4) = λ^2 - λ - 2 = 0. (32)
Thus the eigenvalues are λ1 = 2 and λ2 = -1.
To find the eigenvectors we return to Eq. (31) and replace λ by each of the eigenvalues in turn. For λ = 2 we have

( 1  -1 ) ( x1 )   ( 0 )
( 4  -4 ) ( x2 ) = ( 0 ).    (33)

Hence each row of this vector equation leads to the condition x1 - x2 = 0, so x1 and x2 are equal, but their value is not determined. If x1 = c, then x2 = c also and the eigenvector x(1) is

x(1) = c ( 1 )
         ( 1 ),    c ≠ 0. (34)
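As a numerical cross-check of the hand computation (a sketch, not part of the text), NumPy's eigenvalue routine reproduces both the eigenvalues and the direction of the eigenvector for λ = 2:

```python
import numpy as np

A = np.array([[3.0, -1.0], [4.0, -2.0]])
evals, evecs = np.linalg.eig(A)
print(np.sort(evals))  # eigenvalues -1 and 2, as found above

# The eigenvector for lambda = 2 is proportional to (1, 1);
# normalizing by the first component recovers that direction.
v = evecs[:, np.argmax(evals)]
print(v / v[0])
```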