Elementary Differential Equations and Boundary Value Problems - Boyce W.E.

Boyce W.E. Elementary Differential Equations and Boundary Value Problems - John Wiley & Sons, 2001. - 1310 p.

28. Suppose that det A = 0, and that x = x(0) is a solution of Ax = b. Show that if ξ is a solution of Aξ = 0 and α is any constant, then x = x(0) + αξ is also a solution of Ax = b.
29. Suppose that det A = 0 and that y is a solution of A*y = 0. Show that if (b, y) = 0 for every such y, then Ax = b has solutions. Note that this is the converse of Problem 27; the form of the solution is given by Problem 28.
30. Prove that λ = 0 is an eigenvalue of A if and only if A is singular.
31. Prove that if A is Hermitian, then (Ax, y) = (x, Ay), where x and y are any vectors.
32. In this problem we show that the eigenvalues of a Hermitian matrix A are real. Let x be an eigenvector corresponding to the eigenvalue λ.
(a) Show that (Ax, x) = (x, Ax). Hint: See Problem 31.
(b) Show that λ(x, x) = λ̄(x, x). Hint: Recall that Ax = λx.
(c) Show that λ = λ̄; that is, the eigenvalue λ is real.
33. Show that if λ1 and λ2 are eigenvalues of a Hermitian matrix A, and if λ1 ≠ λ2, then the corresponding eigenvectors x(1) and x(2) are orthogonal.
Hint: Use the results of Problems 31 and 32 to show that (λ1 - λ2)(x(1), x(2)) = 0.
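The facts asserted in Problems 30, 32, and 33 can be spot-checked numerically. A minimal NumPy sketch, with matrices chosen purely for illustration (they are assumptions of this sketch, not taken from the text):

```python
import numpy as np

# Problem 30: a singular matrix has 0 as an eigenvalue.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # rows proportional, so det S = 0
print(np.any(np.isclose(np.linalg.eigvals(S), 0.0)))        # True

# Problems 32-33: a Hermitian matrix (equal to its conjugate transpose)
# has real eigenvalues, and eigenvectors belonging to distinct eigenvalues
# are orthogonal under the complex inner product (x, y) = sum x_i conj(y_i).
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)          # confirm A is Hermitian

eigvals, eigvecs = np.linalg.eigh(A)       # eigh: solver for Hermitian matrices
print(np.allclose(eigvals.imag, 0))        # True: the eigenvalues are real
print(abs(np.vdot(eigvecs[:, 0], eigvecs[:, 1])) < 1e-10)   # True: orthogonal
```

Note that `np.linalg.eigh` exploits the Hermitian structure and returns a real array of eigenvalues, which is itself a reflection of Problem 32.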
Chapter 7. Systems of First Order Linear Equations
7.4 Basic Theory of Systems of First Order Linear Equations
The general theory of a system of n first order linear equations
x1' = p11(t)x1 + ··· + p1n(t)xn + g1(t),
⋮
xn' = pn1(t)x1 + ··· + pnn(t)xn + gn(t)                    (1)
closely parallels that of a single linear equation of nth order. The discussion in this section therefore follows the same general lines as that in Sections 3.2, 3.3, and 4.1. To discuss the system (1) most effectively, we write it in matrix notation. That is, we consider x1 = φ1(t), ..., xn = φn(t) to be components of a vector x = φ(t); similarly,
g1(t), ..., gn(t) are components of a vector g(t), and p11(t), ..., pnn(t) are elements of an n x n matrix P(t). Equation (1) then takes the form
x' = P(t)x + g(t). (2)
The use of vectors and matrices not only saves a great deal of space and facilitates calculations but also emphasizes the similarity between systems of equations and single (scalar) equations.
A vector x = φ(t) is said to be a solution of Eq. (2) if its components satisfy the system of equations (1). Throughout this section we assume that P and g are continuous on some interval α < t < β; that is, each of the scalar functions p11, ..., pnn, g1, ..., gn is continuous there. According to Theorem 7.1.2, this is sufficient to guarantee the existence of solutions of Eq. (2) on the interval α < t < β.
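Equation (2) translates directly into code: P(t) and g(t) become functions returning a matrix and a vector, and any general-purpose integrator can then produce a numerical solution. A minimal sketch using SciPy, where the particular 2x2 system and initial data are illustrative assumptions, not from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A hypothetical instance of Eq. (2): x' = P(t) x + g(t).
def P(t):
    return np.array([[0.0, 1.0],
                     [-1.0, 0.0]])

def g(t):
    return np.array([0.0, 0.0])    # homogeneous in this sketch

def rhs(t, x):
    return P(t) @ x + g(t)

# x' = (0 1; -1 0) x with x(0) = (1, 0) has exact solution (cos t, -sin t),
# so at t = pi the solution should be close to (-1, 0).
sol = solve_ivp(rhs, (0.0, np.pi), [1.0, 0.0], rtol=1e-9, atol=1e-9)
print(np.allclose(sol.y[:, -1], [-1.0, 0.0], atol=1e-6))   # True
```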
It is convenient to consider first the homogeneous equation
x' = P(t)x                                                 (3)
obtained from Eq. (2) by setting g(t) = 0. Once the homogeneous equation has been solved, there are several methods that can be used to solve the nonhomogeneous equation (2); this is taken up in Section 7.9. We use the notation
x(1)(t) = (x11(t), x21(t), ..., xn1(t))^T, ..., x(k)(t) = (x1k(t), x2k(t), ..., xnk(t))^T
to designate specific solutions of the system (3). Note that xij(t) = xi(j)(t) refers to the ith component of the jth solution x(j)(t). The main facts about the structure of solutions of the system (3) are stated in Theorems 7.4.1 to 7.4.4. They closely resemble the corresponding theorems in Sections 3.2, 3.3, and 4.1; some of the proofs are left to the reader as exercises.
Theorem 7.4.1 If the vector functions x(1) and x(2) are solutions of the system (3), then the linear combination c1x(1) + c2x(2) is also a solution for any constants c1 and c2.
This is the principle of superposition; it is proved simply by differentiating c1x(1) + c2x(2) and using the fact that x(1) and x(2) satisfy Eq. (3). By repeated application of
Theorem 7.4.1 we reach the conclusion that if x(1),..., x(k) are solutions of Eq. (3), then
x = c1x(1)(t) + ··· + ck x(k)(t)
is also a solution for any constants c1, ..., ck. As an example, it can be verified that

x(1)(t) = (e^(3t), 2e^(3t))^T = (1, 2)^T e^(3t),    x(2)(t) = (e^(-t), -2e^(-t))^T = (1, -2)^T e^(-t)

satisfy the equation

x' = ( 1  1 ) x.                                           (7)
     ( 4  1 )

According to Theorem 7.4.1,

x = c1x(1)(t) + c2x(2)(t)

also satisfies Eq. (7).
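The example above, and the superposition principle of Theorem 7.4.1, can be spot-checked numerically. This sketch assumes the coefficient matrix (1 1; 4 1) and the solutions x(1)(t) = (1, 2)e^(3t), x(2)(t) = (1, -2)e^(-t) from the example, with arbitrary constants and an arbitrary time:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

def x1(t):
    return np.array([1.0, 2.0]) * np.exp(3 * t)    # solution with eigenvalue 3

def x2(t):
    return np.array([1.0, -2.0]) * np.exp(-t)      # solution with eigenvalue -1

c1, c2 = 2.5, -1.3   # arbitrary constants
t = 0.7              # arbitrary time

x  = c1 * x1(t) + c2 * x2(t)
# Exact derivative of the combination, since x1' = 3 x1 and x2' = -x2:
dx = c1 * 3 * x1(t) + c2 * (-1) * x2(t)

print(np.allclose(dx, A @ x))   # True: the combination satisfies x' = A x
```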
As we indicated previously, by repeatedly applying Theorem 7.4.1, it follows that every finite linear combination of solutions of Eq. (3) is also a solution. The question now arises as to whether all solutions of Eq. (3) can be found in this way. By analogy with previous cases it is reasonable to expect that for a system of the form (3) of nth order it is sufficient to form linear combinations of n properly chosen solutions. Therefore let x(1), ..., x(n) be n solutions of the nth order system (3), and consider the matrix X(t) whose columns are the vectors x(1)(t), ..., x(n)(t):
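For the 2x2 example above, the matrix X(t) and the linear independence of its columns can be sketched in a few lines; the two solutions used here are the ones from the worked example, which is an assumption of this sketch:

```python
import numpy as np

# X(t) has columns x(1)(t) = (1, 2)e^{3t} and x(2)(t) = (1, -2)e^{-t}.
def X(t):
    return np.column_stack([np.array([1.0, 2.0]) * np.exp(3 * t),
                            np.array([1.0, -2.0]) * np.exp(-t)])

# det X(t) = -4 e^{2t} never vanishes, so the columns are linearly
# independent for every t; numerically, at t = 0:
print(abs(np.linalg.det(X(0.0))) > 1e-6)   # True
```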