(x, x) = \sum_{i=1}^{n} x_i \bar{x}_i = \sum_{i=1}^{n} |x_i|^2. (18)
The nonnegative quantity (x, x)1/2, often denoted by ||x||, is called the length, or magnitude, of x. If (x, y) = 0, then the two vectors x and y are said to be orthogonal. For example, the unit vectors i, j, k of three-dimensional vector geometry form an orthogonal set. On the other hand, if some of the elements of x are not real, then the matrix product
x^T x = \sum_{i=1}^{n} x_i^2 (19)
may not be a real number. For example, let
x = \begin{pmatrix} i \\ -2 \\ 1 + i \end{pmatrix}, \qquad y = \begin{pmatrix} 2 - i \\ i \\ 3 \end{pmatrix}.

Then
x^T y = (i)(2 - i) + (-2)(i) + (1 + i)(3) = 4 + 3i,
(x, y) = (i)(2 + i) + (-2)(-i) + (1 + i)(3) = 2 + 7i,
x^T x = (i)^2 + (-2)^2 + (1 + i)^2 = 3 + 2i,
(x, x) = (i)(-i) + (-2)(-2) + (1 + i)(1 - i) = 7.
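The distinction between x^T y and (x, y) for complex vectors can be checked numerically; a short sketch using numpy (our choice of tool, not the text's):

```python
import numpy as np

# The example vectors from the text.
x = np.array([1j, -2, 1 + 1j])
y = np.array([2 - 1j, 1j, 3])

xTy = np.sum(x * y)                # x^T y: no conjugation
inner_xy = np.sum(x * np.conj(y))  # (x, y) = sum of x_i * conj(y_i)
xTx = np.sum(x * x)                # may fail to be real
inner_xx = np.sum(x * np.conj(x))  # always real and nonnegative

print(xTy, inner_xy, xTx, inner_xx)  # 4+3i, 2+7i, 3+2i, 7 -- matching the text
```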
8. Identity. The multiplicative identity, or simply the identity matrix I, is given by
I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}. (20)
Chapter 7. Systems of First Order Linear Equations
From the definition of matrix multiplication we have
AI = IA = A (21)
for any (square) matrix A. Hence the commutative law does hold for square matrices if one of the matrices is the identity.
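Eq. (21) is easy to verify numerically; a minimal sketch with numpy, using an arbitrary 3 × 3 matrix:

```python
import numpy as np

# An arbitrary 3x3 matrix (any square matrix works for Eq. (21)).
A = np.array([[1., -1., -1.],
              [3., -1.,  2.],
              [2.,  2.,  3.]])
I = np.eye(3)

print(np.allclose(A @ I, A), np.allclose(I @ A, A))  # True True
```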
9. Inverse. The matrix A is said to be nonsingular or invertible if there is another
matrix B such that AB = I and BA = I, where I is the identity. If there is such a B, it
can be shown that there is only one. It is called the multiplicative inverse, or simply the inverse, of A, and we write B = A-1. Then
AA-1 = A-1 A = I. (22)
Matrices that do not have an inverse are called singular or noninvertible.
There are various ways to compute A^{-1} from A, assuming that it exists. One way involves the use of determinants. Associated with each element a_{ij} of a given matrix is the minor M_{ij}, which is the determinant of the matrix obtained by deleting the ith row and jth column of the original matrix, that is, the row and column containing a_{ij}. Also associated with each element a_{ij} is the cofactor C_{ij}, defined by the equation
C_{ij} = (-1)^{i+j} M_{ij}. (23)
If B = A^{-1}, then it can be shown that the general element b_{ij} is given by

b_{ij} = \frac{C_{ji}}{\det A}. (24)
While Eq. (24) is not an efficient way3 to calculate A-1, it does suggest a condition
that A must satisfy for it to have an inverse. In fact, the condition is both necessary and
sufficient: A is nonsingular if and only if det A ≠ 0. If det A = 0, then A is singular.
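A direct, if inefficient, implementation of the cofactor formula of Eq. (24) can be sketched as follows (the function name `cofactor_inverse` is ours, not the text's):

```python
import numpy as np

def cofactor_inverse(A):
    """Inverse via Eq. (24): b_ij = C_ji / det A (inefficient for large n)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det A = 0: A is singular and has no inverse")
    B = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_ji: delete row j and column i of A.
            M = np.delete(np.delete(A, j, axis=0), i, axis=1)
            B[i, j] = (-1) ** (i + j) * np.linalg.det(M) / d
    return B
```

Note the transposition of subscripts: entry b_{ij} uses the cofactor C_{ji}. For a singular matrix, the det A = 0 test raises an error, in line with the necessary-and-sufficient condition above.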
Another and usually better way to compute A-1 is by means of elementary row operations. There are three such operations:
1. Interchange of two rows.
2. Multiplication of a row by a nonzero scalar.
3. Addition of any multiple of one row to another row.
Any nonsingular matrix A can be transformed into the identity I by a systematic sequence of these operations. It is possible to show that if the same sequence of operations is then performed on I, it is transformed into A-1. The transformation of a matrix by a sequence of elementary row operations is referred to as row reduction or Gaussian elimination. The following example illustrates the calculation of an inverse matrix in this way.
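The procedure just described can be sketched in code: augment A with I, apply the three elementary row operations until the left half becomes I, and read A^{-1} off the right half (a minimal numpy sketch; the name `row_reduce_inverse` and the partial-pivoting choice are ours):

```python
import numpy as np

def row_reduce_inverse(A):
    """Gauss-Jordan: row-reduce [A | I] to [I | A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for col in range(n):
        # Operation 1: swap in the row with the largest pivot (for stability).
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        # Operation 2: scale the pivot row so the diagonal entry is 1.
        aug[col] /= aug[col, col]
        # Operation 3: subtract multiples of the pivot row from every other row.
        for r in range(n):
            if r != col:
                aug[r] -= aug[r, col] * aug[col]
    return aug[:, n:]
```

The same sequence of operations that turns the left half into I is, by construction, applied to the right half, which therefore ends up holding A^{-1}.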
3For large n the number of multiplications required to evaluate A-1 by Eq. (24) is proportional to n!. If one uses more efficient methods, such as the row reduction procedure described later, the number of multiplications is proportional only to n3. Even for small values of n (such as n = 4), determinants are not an economical tool in calculating inverses, and row reduction methods are preferred.
7.2 Review of Matrices
Find the inverse of
A = \begin{pmatrix} 1 & -1 & -1 \\ 3 & -1 & 2 \\ 2 & 2 & 3 \end{pmatrix}.
The matrix A can be transformed into I by the following sequence of operations. The result of each step appears below the statement.
(a) Obtain zeros in the off-diagonal positions in the first column by adding (-3) times the first row to the second row and adding (-2) times the first row to the third row.
\begin{pmatrix} 1 & -1 & -1 \\ 0 & 2 & 5 \\ 0 & 4 & 5 \end{pmatrix}
(b) Obtain a 1 in the diagonal position in the second column by multiplying the second row by 1/2.
\begin{pmatrix} 1 & -1 & -1 \\ 0 & 1 & \frac{5}{2} \\ 0 & 4 & 5 \end{pmatrix}
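The first two steps of the example can be replayed numerically (a small numpy sketch; the variable names are ours):

```python
import numpy as np

A = np.array([[1., -1., -1.],
              [3., -1.,  2.],
              [2.,  2.,  3.]])
B = A.copy()

# Step (a): add (-3) * row 1 to row 2, and (-2) * row 1 to row 3.
B[1] += -3 * B[0]
B[2] += -2 * B[0]

# Step (b): multiply row 2 by 1/2 to place a 1 on the diagonal.
B[1] *= 0.5

print(B)  # rows: [1, -1, -1], [0, 1, 2.5], [0, 4, 5]
```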