
2t^2 y" + 3t y' - y = 0,    t > 0.    (14)
Verify that the Wronskian of y1 and y2 is given by Eq. (13).
From the example just cited we know that W(y1, y2)(t) = -(3/2)t^(-3/2). To use Eq. (13) we must write the differential equation (14) in the standard form, with the coefficient of y" equal to 1. Thus we obtain
y" + 3 '---22 = 0,
21 21
so p(1) = 3/21. Hence
W(ӳ, 2)(1) = exp
/
3
d1 21
3
= exp ( - - ln 1
= 1-3/2. (15)
Equation (15) gives the Wronskian of any pair of solutions of Eq. (14). For the particular solutions given in this example we must choose c = -3/2.
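To make this computation concrete, here is a minimal sympy sketch. It assumes, since the cited example is not reproduced on this page, that the two solutions in question are y1(t) = t^(1/2) and y2(t) = 1/t; both satisfy Eq. (14), and their Wronskian agrees with Eq. (15) when c = -3/2.

    # Symbolic check of Eqs. (13)-(15); the solutions y1, y2 are assumed here,
    # not taken from this page.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    y1 = sp.sqrt(t)      # assumed solution t^(1/2)
    y2 = 1 / t           # assumed solution t^(-1)

    # Both functions solve Eq. (14): 2t^2 y" + 3t y' - y = 0.
    ode = lambda y: sp.simplify(2*t**2*sp.diff(y, t, 2) + 3*t*sp.diff(y, t) - y)
    assert ode(y1) == 0 and ode(y2) == 0

    # Wronskian computed directly: W = y1*y2' - y1'*y2.
    W_direct = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)

    # Wronskian from Eq. (13) with p(t) = 3/(2t): W = c*exp(-∫ p dt).
    c = sp.Rational(-3, 2)
    W_abel = c * sp.exp(-sp.integrate(sp.Rational(3, 2)/t, t))

    print(W_direct)                        # -3/(2*t**(3/2))
    print(sp.simplify(W_direct - W_abel))  # 0, so the two expressions agree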
A stronger version of Theorem 3.3.1 can be established if the two functions involved are solutions of a second order linear homogeneous differential equation.
Theorem 3.3.3. Let y1 and y2 be the solutions of Eq. (7),

L[y] = y" + p(t)y' + q(t)y = 0,

where p and q are continuous on an open interval I. Then y1 and y2 are linearly dependent on I if and only if W(y1, y2)(t) is zero for all t in I. Alternatively, y1 and y2 are linearly independent on I if and only if W(y1, y2)(t) is never zero in I.
Of course, we know by Theorem 3.3.2 that W(y1, y2)(t) is either everywhere zero or nowhere zero in I. In proving Theorem 3.3.3, observe first that if y1 and y2 are linearly
dependent, then W(y1, y2)(t) is zero for all t in I by Theorem 3.3.1. It remains to prove the converse; that is, if W(y1, y2)(t) is zero throughout I, then y1 and y2 are linearly dependent. Let t0 be any point in I; then necessarily W(y1, y2)(t0) = 0. Consequently, the system of equations

c1 y1(t0) + c2 y2(t0) = 0,
c1 y1'(t0) + c2 y2'(t0) = 0    (16)

for c1 and c2 has a nontrivial solution. Using these values of c1 and c2, let φ(t) = c1 y1(t) + c2 y2(t). Then φ is a solution of Eq. (7), and by Eqs. (16) φ also satisfies the initial conditions

φ(t0) = 0,    φ'(t0) = 0.    (17)

Therefore, by the uniqueness part of Theorem 3.2.1, or by Example 2 of Section 3.2, φ(t) = 0 for all t in I. Since φ(t) = c1 y1(t) + c2 y2(t) with c1 and c2 not both zero, this means that y1 and y2 are linearly dependent. The alternative statement of the theorem follows immediately.
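The mechanism of this proof can be checked symbolically. The example below is assumed for illustration and is not from the text: y1 = e^t and y2 = 3e^t are linearly dependent solutions of y" - y = 0, so their Wronskian vanishes identically and the coefficient matrix of the system (16) is singular at every t0, which yields a nontrivial pair (c1, c2).

    # Assumed example: a linearly dependent pair of solutions of y" - y = 0.
    import sympy as sp

    t, t0 = sp.symbols('t t0')
    y1, y2 = sp.exp(t), 3*sp.exp(t)

    # The Wronskian vanishes identically, as Theorem 3.3.3 predicts.
    W = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)
    print(W)              # 0

    # The coefficient matrix of system (16) at an arbitrary point t0.
    A = sp.Matrix([[y1, y2],
                   [sp.diff(y1, t), sp.diff(y2, t)]]).subs(t, t0)
    print(A.det())        # 0 (this is just W evaluated at t0)
    print(A.nullspace())  # [Matrix([[-3], [1]])]: c1 = -3, c2 = 1 gives
                          # c1*y1 + c2*y2 = -3*e^t + 3*e^t = 0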
We can now summarize the facts about fundamental sets of solutions, Wronskians, and linear independence in the following way. Let y1 and y2 be solutions of Eq. (7),

y" + p(t)y' + q(t)y = 0,

where p and q are continuous on an open interval I. Then the following four statements are equivalent, in the sense that each one implies the other three (a short symbolic check of these statements for a specific equation follows the list):
1. The functions y1 and y2 are a fundamental set of solutions on I.
2. The functions y1 and y2 are linearly independent on I.
3. W(y1, y2)(t0) ≠ 0 for some t0 in I.
4. W(y1, y2)(t) ≠ 0 for all t in I.
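As a quick illustration of these equivalences, consider an assumed example that is not taken from the text: y" + y = 0 on I = (-∞, ∞), with solutions y1 = cos t and y2 = sin t. A short sympy check shows that the Wronskian is identically 1, so statements 3 and 4 hold; accordingly y1 and y2 are linearly independent and form a fundamental set of solutions on I.

    # Assumed example: y1 = cos t, y2 = sin t solve y" + y = 0.
    import sympy as sp

    t = sp.symbols('t')
    y1, y2 = sp.cos(t), sp.sin(t)

    # W(y1, y2)(t) = y1*y2' - y1'*y2 = cos^2 t + sin^2 t = 1, never zero,
    # so statements 1-4 all hold for this pair on I.
    W = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)
    print(W)   # 1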
It is interesting to note the similarity between second order linear homogeneous differential equations and two-dimensional vector algebra. Two vectors a and b are said to be linearly dependent if there are two scalars k1 and k2, not both zero, such that k1 a + k2 b = 0; otherwise, they are said to be linearly independent. Let i and j be unit vectors directed along the positive x and y axes, respectively. Since k1 i + k2 j = 0 only if k1 = k2 = 0, the vectors i and j are linearly independent. Further, we know that any vector a with components a1 and a2 can be written as a = a1 i + a2 j, that is, as a linear combination of the two linearly independent vectors i and j. It is not difficult to show that any vector in two dimensions can be expressed as a linear combination of any two linearly independent two-dimensional vectors (see Problem 14). Such a pair of linearly independent vectors is said to form a basis for the vector space of two-dimensional vectors.
The term vector space is also applied to other collections of mathematical objects that obey the same laws of addition and multiplication by scalars that geometric vectors do. For example, it can be shown that the set of functions that are twice differentiable on the open interval I forms a vector space. Similarly, the set V of functions satisfying Eq. (7) also forms a vector space.
Since every member of V can be expressed as a linear combination of two linearly independent members y1 and y2, we say that such a pair forms a basis for V. This leads to the conclusion that V is two-dimensional; therefore, it is analogous in many respects to the space of geometric vectors in a plane. Later we find that the set of solutions of an
nth order linear homogeneous differential equation forms a vector space of dimension n, and that any set of n linearly independent solutions of the differential equation forms a basis for the space. This connection between differential equations and vectors constitutes a good reason for the study of abstract linear algebra.