Boyce, W. E. Elementary Differential Equations, 7th edition. Wiley, 2001. 1310 p.
ISBN 0-471-31999-6

In each of Problems 15 through 24 find all eigenvalues and eigenvectors of the given matrix.

15.  [ 5  -1 ]              16.  [ 3  -2 ]
     [ 3   1 ]                   [ 4  -1 ]

17.  [ -2   1 ]             18.  [  1   i ]
     [  1  -2 ]                  [ -i   1 ]

19.  [  1   √3 ]            20.  [ -3   3/4 ]
     [ √3   -1 ]                 [ -5    1  ]

21.  [ 1  0   0 ]           22.  [  3   2   2 ]
     [ 2  1  -2 ]                [  1   4   1 ]
     [ 3  2   1 ]                [ -2  -4  -1 ]

23.  [ 11/9  -2/9   8/9 ]   24.  [ 3  2  4 ]
     [ -2/9   2/9  10/9 ]        [ 2  0  2 ]
     [  8/9  10/9   5/9 ]        [ 4  2  3 ]
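Hand computations for problems like these can be spot-checked numerically. A minimal sketch using NumPy, applied to the matrix of Problem 15 (the use of NumPy here is an illustration, not part of the text):

```python
import numpy as np

# Matrix from Problem 15.
A = np.array([[5.0, -1.0],
              [3.0,  1.0]])

# np.linalg.eig returns the eigenvalues and unit-norm eigenvectors (as columns).
eigvals, eigvecs = np.linalg.eig(A)

# Each pair must satisfy the defining relation A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# The characteristic polynomial is l^2 - 6l + 8 = (l - 2)(l - 4),
# so the eigenvalues are 2 and 4.
assert np.allclose(sorted(eigvals.real), [2.0, 4.0])
```

The same check works for the complex matrices in Problems 18 and 24; `eig` handles complex entries directly.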
Problems 25 through 29 deal with the problem of solving Ax = b when det A = 0.
25. Suppose that, for a given matrix A, there is a nonzero vector x such that Ax = 0. Show that there is also a nonzero vector y such that A*y = 0.
26. Show that (Ax, y) = (x, A*y) for any vectors x and y.
27. Suppose that det A = 0 and that Ax = b has solutions. Show that (b, y) = 0, where y is
any solution of A*y = 0. Verify that this statement is true for the set of equations in
Example 2.
Hint: Use the result of Problem 26.
28. Suppose that det A = 0, and that x = x^(0) is a solution of Ax = b. Show that if ξ is a solution of Aξ = 0 and α is any constant, then x = x^(0) + αξ is also a solution of Ax = b.
29. Suppose that det A = 0 and that y is a solution of A*y = 0. Show that if (b, y) = 0 for every such y, then Ax = b has solutions. Note that this is the converse of Problem 27; the form of the solution is given by Problem 28.
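The solvability condition of Problems 27 and 29 can be illustrated numerically: for a singular A, the system Ax = b has solutions exactly when b is orthogonal to every solution of A*y = 0. A small sketch (the matrix and right-hand sides below are illustrative choices, not taken from the text):

```python
import numpy as np

# A singular 2x2 matrix: the second row is twice the first, so det A = 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Here A is real and symmetric, so A* = A^T = A.
# The null space of A^T is spanned by y = (2, -1).
y = np.array([2.0, -1.0])
assert np.allclose(A.T @ y, 0.0)

b_good = np.array([1.0, 2.0])   # (b, y) = 0,  so Ax = b should be solvable
b_bad = np.array([1.0, 0.0])    # (b, y) != 0, so no solution exists

assert np.isclose(b_good @ y, 0.0)
assert not np.isclose(b_bad @ y, 0.0)

# Indeed x = (1, 0) solves A x = b_good, consistent with Problem 29.
assert np.allclose(A @ np.array([1.0, 0.0]), b_good)
```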
30. Prove that λ = 0 is an eigenvalue of A if and only if A is singular.
31. Prove that if A is Hermitian, then (Ax, y) = (x, Ay), where x and y are any vectors.
32. In this problem we show that the eigenvalues of a Hermitian matrix A are real. Let x be an eigenvector corresponding to the eigenvalue λ.
(a) Show that (Ax, x) = (x, Ax). Hint: See Problem 31.
(b) Show that λ(x, x) = λ̄(x, x). Hint: Recall that Ax = λx.
(c) Show that λ = λ̄; that is, the eigenvalue λ is real.
33. Show that if λ1 and λ2 are eigenvalues of a Hermitian matrix A, and if λ1 ≠ λ2, then the corresponding eigenvectors x(1) and x(2) are orthogonal.
Hint: Use the results of Problems 31 and 32 to show that (λ1 − λ2)(x(1), x(2)) = 0.
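Problems 31 through 33 can also be spot-checked numerically: for a Hermitian matrix, the eigenvalues come out real and eigenvectors belonging to distinct eigenvalues are orthogonal. A sketch with an illustrative Hermitian matrix (not one from the text):

```python
import numpy as np

# An illustrative 2x2 Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

# eigh is NumPy's routine for Hermitian matrices; it returns real eigenvalues.
eigvals, eigvecs = np.linalg.eigh(A)

# Problem 32: the general routine eig, which does not assume Hermitian
# structure, still returns eigenvalues with zero imaginary part.
assert np.allclose(np.linalg.eig(A)[0].imag, 0.0)

# Problem 33: the eigenvalues here are distinct, and the eigenvectors are
# orthogonal under the complex inner product (x, y) = sum_i x_i * conj(y_i).
x1, x2 = eigvecs[:, 0], eigvecs[:, 1]
assert abs(eigvals[0] - eigvals[1]) > 1e-9
assert np.isclose(np.vdot(x1, x2), 0.0)
```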
Chapter 7. Systems of First Order Linear Equations
7.4 Basic Theory of Systems of First Order Linear Equations
The general theory of a system of n first order linear equations
  x1' = p11(t) x1 + ... + p1n(t) xn + g1(t),
   .
   .
   .                                                    (1)
  xn' = pn1(t) x1 + ... + pnn(t) xn + gn(t)
closely parallels that of a single linear equation of nth order. The discussion in this section therefore follows the same general lines as that in Sections 3.2, 3.3, and 4.1. To discuss the system (1) most effectively, we write it in matrix notation. That is, we consider x1 = φ1(t), ..., xn = φn(t) to be components of a vector x = φ(t); similarly, g1(t), ..., gn(t) are components of a vector g(t), and p11(t), ..., pnn(t) are elements of an n x n matrix P(t). Equation (1) then takes the form

  x' = P(t)x + g(t).                                    (2)
The use of vectors and matrices not only saves a great deal of space and facilitates calculations but also emphasizes the similarity between systems of equations and single (scalar) equations.
A vector x = φ(t) is said to be a solution of Eq. (2) if its components satisfy the system of equations (1). Throughout this section we assume that P and g are continuous on some interval α < t < β; that is, each of the scalar functions p11, ..., pnn, g1, ..., gn is continuous there. According to Theorem 7.1.2, this is sufficient to guarantee the existence of solutions of Eq. (2) on the interval α < t < β.
It is convenient to consider first the homogeneous equation
  x' = P(t)x                                            (3)
obtained from Eq. (2) by setting g(t) = 0. Once the homogeneous equation has been solved, there are several methods that can be used to solve the nonhomogeneous equation (2); this is taken up in Section 7.9. We use the notation
             [ x11(t) ]                  [ x1k(t) ]
  x(1)(t) =  [ x21(t) ]  , ..., x(k)(t) = [ x2k(t) ]     (4)
             [   ...  ]                  [   ...  ]
             [ xn1(t) ]                  [ xnk(t) ]

to designate specific solutions of the system (3). Note that xij(t) refers to the ith component of the jth solution x(j)(t). The main facts about the structure of solutions of the system (3) are stated in Theorems 7.4.1 to 7.4.4. They closely resemble the corresponding theorems in Sections 3.2, 3.3, and 4.1; some of the proofs are left to the reader as exercises.
Theorem 7.4.1 If the vector functions x(1) and x(2) are solutions of the system (3), then the linear combination c1x(1) + c2x(2) is also a solution for any constants c1 and c2.
This is the principle of superposition; it is proved simply by differentiating c1x(1) + c2x(2) and using the fact that x(1) and x(2) satisfy Eq. (3). By repeated application of
Theorem 7.4.1 we reach the conclusion that if x(1), ..., x(k) are solutions of Eq. (3), then

  x = c1 x(1)(t) + ... + ck x(k)(t)                     (5)

is also a solution for any constants c1, ..., ck. As an example, it can be verified that

  x(1)(t) = [  e^{3t} ] = [ 1 ] e^{3t},   x(2)(t) = [   e^{-t} ] = [  1 ] e^{-t}     (6)
            [ 2e^{3t} ]   [ 2 ]                     [ -2e^{-t} ]   [ -2 ]

satisfy the equation

  x' = [ 1  1 ] x.                                      (7)
       [ 4  1 ]

According to Theorem 7.4.1,

  x = c1 [ 1 ] e^{3t} + c2 [  1 ] e^{-t} = c1 x(1)(t) + c2 x(2)(t)     (8)
         [ 2 ]             [ -2 ]

also satisfies Eq. (7).
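The example above can be checked numerically. Each solution in Eq. (6) has the form v e^{λt}, so substituting into x' = Px reduces to the eigenvalue relation P v = λ v; superposition then follows by linearity. A sketch using NumPy (the particular constants c1, c2 and the time t are arbitrary illustrative choices):

```python
import numpy as np

# Coefficient matrix of Eq. (7).
P = np.array([[1.0, 1.0],
              [4.0, 1.0]])

# The two solutions in Eq. (6) are v1 * e^{3t} and v2 * e^{-t}.
v1, lam1 = np.array([1.0, 2.0]), 3.0
v2, lam2 = np.array([1.0, -2.0]), -1.0

# Differentiating v * e^{lam t} gives lam * v * e^{lam t}, so x' = P x
# holds exactly when P v = lam v.
assert np.allclose(P @ v1, lam1 * v1)
assert np.allclose(P @ v2, lam2 * v2)

# Superposition (Theorem 7.4.1, Eq. (8)): any c1, c2 also give a solution.
c1, c2, t = 2.5, -0.5, 0.7
x = c1 * v1 * np.exp(lam1 * t) + c2 * v2 * np.exp(lam2 * t)
dx = c1 * lam1 * v1 * np.exp(lam1 * t) + c2 * lam2 * v2 * np.exp(lam2 * t)
assert np.allclose(dx, P @ x)
```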
As we indicated previously, by repeatedly applying Theorem 7.4.1, it follows that every finite linear combination of solutions of Eq. (3) is also a solution. The question now arises as to whether all solutions of Eq. (3) can be found in this way. By analogy with previous cases it is reasonable to expect that for a system of the form (3) of nth order it is sufficient to form linear combinations of n properly chosen solutions. Therefore let x(1), ..., x(n) be n solutions of the nth order system (3), and consider the matrix X(t) whose columns are the vectors x(1)(t), ..., x(n)(t):