# Elementary differential equations 7th edition - Boyce W.E

ISBN 0-471-31999-6


$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. \tag{9}$$


Chapter 7. Systems of First Order Linear Equations
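The entrywise rule in Eq. (9) translates directly into a triple loop. A minimal sketch in plain Python with list-of-lists matrices (the function name `matmul` is ours, not the text's):

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B entrywise:
    c_ij = sum over k of a_ik * b_kj, as in Eq. (9)."""
    m, n, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# A small check: a (1 x 2)(2 x 1) product collapses to one entry.
print(matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

The same routine handles the square-matrix products used in the example below.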

By direct calculation it can be shown that matrix multiplication satisfies the associative law

$$(\mathbf{A}\mathbf{B})\mathbf{C} = \mathbf{A}(\mathbf{B}\mathbf{C}) \tag{10}$$

and the distributive law

$$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}. \tag{11}$$

However, in general, matrix multiplication is not commutative. For both products AB and BA to exist and to be of the same size, it is necessary that A and B be square matrices of the same order. Even in that case the two products are usually unequal, so that, in general,

$$\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}. \tag{12}$$

EXAMPLE 1

To illustrate the multiplication of matrices, and also the fact that matrix multiplication is not necessarily commutative, consider the matrices

$$\mathbf{A} = \begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & -1 \\ 2 & 1 & 1 \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} 2 & 1 & -1 \\ 1 & -1 & 0 \\ 2 & -1 & 1 \end{pmatrix}.$$

From the definition of multiplication given in Eq. (9) we have

$$\mathbf{A}\mathbf{B} = \begin{pmatrix} 2 - 2 + 2 & 1 + 2 - 1 & -1 + 0 + 1 \\ 0 + 2 - 2 & 0 - 2 + 1 & 0 + 0 - 1 \\ 4 + 1 + 2 & 2 - 1 - 1 & -2 + 0 + 1 \end{pmatrix} = \begin{pmatrix} 2 & 2 & 0 \\ 0 & -1 & -1 \\ 7 & 0 & -1 \end{pmatrix}.$$

Similarly, we find that

$$\mathbf{B}\mathbf{A} = \begin{pmatrix} 0 & -3 & 0 \\ 1 & -4 & 2 \\ 4 & -5 & 4 \end{pmatrix}.$$

Clearly, AB ≠ BA.
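The example can be checked mechanically. A short Python sketch (the helper and variable names are ours, not the text's) that recomputes both products:

```python
A = [[1, -2, 1], [0, 2, -1], [2, 1, 1]]
B = [[2, 1, -1], [1, -1, 0], [2, -1, 1]]

def matmul(X, Y):
    # c_ij = sum over k of x_ik * y_kj, the rule of Eq. (9)
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)       # [[2, 2, 0], [0, -1, -1], [7, 0, -1]]
print(BA)       # [[0, -3, 0], [1, -4, 2], [4, -5, 4]]
print(AB != BA) # True: multiplication is not commutative here
```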

7. Multiplication of Vectors. Matrix multiplication also applies as a special case if the matrices A and B are 1 × n and n × 1 row and column vectors, respectively. Denoting these vectors by x^T and y, we have

$$\mathbf{x}^T\mathbf{y} = \sum_{i=1}^{n} x_i y_i. \tag{13}$$

This is the extension to n dimensions of the familiar dot product from physics and calculus. The result of Eq. (13) is a (complex) number, and it follows directly from Eq. (13) that

$$\mathbf{x}^T\mathbf{y} = \mathbf{y}^T\mathbf{x}, \qquad \mathbf{x}^T(\mathbf{y} + \mathbf{z}) = \mathbf{x}^T\mathbf{y} + \mathbf{x}^T\mathbf{z}, \qquad (\alpha\mathbf{x})^T\mathbf{y} = \alpha(\mathbf{x}^T\mathbf{y}) = \mathbf{x}^T(\alpha\mathbf{y}). \tag{14}$$

7.2 Review of Matrices


There is another vector product that is also defined for any two vectors having the same number of components. This product, denoted by (x, y), is called the scalar or inner product, and is defined by

$$(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n} x_i \bar{y}_i. \tag{15}$$

The scalar product is also a (complex) number, and by comparing Eqs. (13) and (15) we see that

$$(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T \bar{\mathbf{y}}. \tag{16}$$

Thus, if all of the elements of y are real, then the two products (13) and (15) are identical. From Eq. (15) it follows that

$$(\mathbf{x}, \mathbf{y}) = \overline{(\mathbf{y}, \mathbf{x})}, \qquad (\alpha\mathbf{x}, \mathbf{y}) = \alpha(\mathbf{x}, \mathbf{y}),$$
$$(\mathbf{x}, \mathbf{y} + \mathbf{z}) = (\mathbf{x}, \mathbf{y}) + (\mathbf{x}, \mathbf{z}), \qquad (\mathbf{x}, \alpha\mathbf{y}) = \bar{\alpha}(\mathbf{x}, \mathbf{y}). \tag{17}$$
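The properties in Eq. (17) are easy to spot-check with Python's built-in complex numbers; the vectors and scalar below are arbitrary test values of ours, not from the text:

```python
def inner(x, y):
    # (x, y) = sum over i of x_i * conj(y_i), the rule of Eq. (15)
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3j]
y = [2 - 1j, 1 + 1j]
a = 2 - 3j

# Conjugate symmetry: (x, y) equals the conjugate of (y, x).
print(inner(x, y) == inner(y, x).conjugate())            # True
# A scalar comes out of the first slot unchanged ...
print(inner([a * v for v in x], y) == a * inner(x, y))   # True
# ... but comes out conjugated from the second slot.
print(inner(x, [a * v for v in y]) == a.conjugate() * inner(x, y))  # True
```

The comparisons are exact here because every component has integer real and imaginary parts.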

Note that even if the vector x has elements with nonzero imaginary parts, the scalar product of x with itself yields a nonnegative real number,

$$(\mathbf{x}, \mathbf{x}) = \sum_{i=1}^{n} x_i \bar{x}_i = \sum_{i=1}^{n} |x_i|^2. \tag{18}$$

The nonnegative quantity (x, x)^{1/2}, often denoted by ||x||, is called the length, or magnitude, of x. If (x, y) = 0, then the two vectors x and y are said to be orthogonal. For example, the unit vectors i, j, k of three-dimensional vector geometry form an orthogonal set. On the other hand, if some of the elements of x are not real, then the matrix product

$$\mathbf{x}^T\mathbf{x} = \sum_{i=1}^{n} x_i^2 \tag{19}$$

may not be a real number. For example, let

$$\mathbf{x} = \begin{pmatrix} i \\ -2 \\ 1 + i \end{pmatrix}, \qquad \mathbf{y} = \begin{pmatrix} 2 - i \\ i \\ 3 \end{pmatrix}.$$

Then

$$\begin{aligned}
\mathbf{x}^T\mathbf{y} &= (i)(2 - i) + (-2)(i) + (1 + i)(3) = 4 + 3i,\\
(\mathbf{x}, \mathbf{y}) &= (i)(2 + i) + (-2)(-i) + (1 + i)(3) = 2 + 7i,\\
\mathbf{x}^T\mathbf{x} &= (i)^2 + (-2)^2 + (1 + i)^2 = 3 + 2i,\\
(\mathbf{x}, \mathbf{x}) &= (i)(-i) + (-2)(-2) + (1 + i)(1 - i) = 7.
\end{aligned}$$
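The same four products can be reproduced with Python's complex type, with `1j` playing the role of i (the variable names are ours):

```python
x = [1j, -2, 1 + 1j]
y = [2 - 1j, 1j, 3]

xTy = sum(a * b for a, b in zip(x, y))             # x^T y, the rule of Eq. (13)
xy = sum(a * b.conjugate() for a, b in zip(x, y))  # (x, y), the rule of Eq. (15)
xTx = sum(a * a for a in x)                        # x^T x, as in Eq. (19)
xx = sum(a * a.conjugate() for a in x)             # (x, x), as in Eq. (18)

print(xTy, xy, xTx, xx)  # (4+3j) (2+7j) (3+2j) (7+0j)
norm = abs(xx) ** 0.5    # length ||x|| = (x, x)^(1/2) = sqrt(7)
```

Note that only (x, x) is guaranteed real; x^T x comes out complex, matching the computation above.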

8. Identity. The multiplicative identity, or simply the identity matrix I, is given by

$$\mathbf{I} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}. \tag{20}$$


From the definition of matrix multiplication we have

$$\mathbf{A}\mathbf{I} = \mathbf{I}\mathbf{A} = \mathbf{A} \tag{21}$$

for any (square) matrix A. Hence the commutative law does hold for square matrices if one of the matrices is the identity.

9. Inverse. The matrix A is said to be nonsingular or invertible if there is another matrix B such that AB = I and BA = I, where I is the identity. If there is such a B, it can be shown that there is only one. It is called the multiplicative inverse, or simply the inverse, of A, and we write B = A⁻¹. Then

$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}. \tag{22}$$

Matrices that do not have an inverse are called singular or noninvertible.

There are various ways to compute A⁻¹ from A, assuming that it exists. One way involves the use of determinants. Associated with each element aij of a given matrix is the minor Mij, which is the determinant of the matrix obtained by deleting the ith row and jth column of the original matrix, that is, the row and column containing aij. Also associated with each element aij is the cofactor Cij defined by the equation

$$C_{ij} = (-1)^{i+j} M_{ij}. \tag{23}$$

If B = A⁻¹, then it can be shown that the general element bij is given by

$$b_{ij} = \frac{C_{ji}}{\det \mathbf{A}}. \tag{24}$$

While Eq. (24) is not an efficient way to calculate A⁻¹, it does suggest a condition that A must satisfy for it to have an inverse. In fact, the condition is both necessary and sufficient: A is nonsingular if and only if det A ≠ 0. If det A = 0, then A is singular.
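Equations (23) and (24) translate into a direct, if inefficient, inversion routine. A sketch in plain Python (the function names are ours), using exact rational arithmetic so the determinant and cofactors come out exactly:

```python
from fractions import Fraction

def minor(A, i, j):
    """Matrix left after deleting row i and column j; its determinant is Mij."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def inverse(A):
    """b_ij = C_ji / det A, with C_ij = (-1)^(i+j) M_ij, Eqs. (23)-(24)."""
    n, d = len(A), Fraction(det(A))
    return [[(-1) ** (i + j) * det(minor(A, j, i)) / d for j in range(n)]
            for i in range(n)]

Binv = inverse([[2, 1], [5, 3]])
# Fractions compare equal to ints: Binv == [[3, -1], [-5, 2]]
```

Note the index swap C_ji in `inverse`: the inverse is the transposed cofactor matrix divided by det A, and the routine fails exactly when det A = 0, as the text states.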

Another and usually better way to compute A—1 is by means of elementary row operations. There are three such operations:

1. Interchange of two rows.

2. Multiplication of a row by a nonzero scalar.

3. Addition of any multiple of one row to another row.

Any nonsingular matrix A can be transformed into the identity I by a systematic sequence of these operations. It is possible to show that if the same sequence of operations is then performed on I, it is transformed into A⁻¹. The transformation of a matrix by a sequence of elementary row operations is referred to as row reduction or Gaussian elimination. The following example illustrates the calculation of an inverse matrix in this way.
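The row-reduction procedure can be sketched as a small Gauss–Jordan routine in plain Python (ours, not the book's worked example), with each of the three elementary operations marked:

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert a nonsingular matrix by row reduction: form [A | I] and
    apply elementary row operations until the left half becomes I."""
    n = len(A)
    # Augment A with the identity matrix.
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Operation 1: interchange rows to bring a nonzero pivot into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Operation 2: multiply the pivot row by a nonzero scalar (1/pivot).
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Operation 3: add multiples of the pivot row to clear the column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    # The right half now holds the inverse.
    return [row[n:] for row in M]
```

For example, `gauss_jordan_inverse([[2, 1], [5, 3]])` returns [[3, -1], [-5, 2]] (as Fractions). A singular input makes the pivot search fail, mirroring the det A = 0 condition above.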
