Combinatorial Optimization - Christofides N.

Christofides N. Combinatorial Optimization. Wiley, 2012. 212 p.
Chapter 3, "Subgradient Optimization", by Claudio Sandi.
(3.1) for large values of m. The first use of the subgradient method (Held and Karp, 1970, 1971) was on this kind of problem, even though its theoretical background can be found earlier in Agmon (1954) and Motzkin and Schoenberg (1954). A number of computational examples have appeared in Held, Wolfe, and Crowder (1974), while improvements on the method can be found in Crowder (1974) and Sandi (1976).
In Section 3.2 a description of the method is given, together with its mathematical contents, numerical examples, and illustrative plots for two-dimensional problems.
In Section 3.3 some computational results are given, related to situations 3.1.1 and 3.1.2, about the use of the subgradient method for the solution of linear inequality systems, while situation 3.1.3 is illustrated by approaching the travelling salesman problem in terms of subgradient procedures.
Finally in Section 3.4 a few comments are given on positive aspects, difficulties encountered, and possible improvements of the subgradient optimization.
3.2 The Subgradient Method
As pointed out in Section 3.1, the subgradient method is a numerical technique, generally used to solve (3.1), i.e. the maximization over x ∈ S of the concave piecewise linear function f(x) = min {a_i x − b_i | i ∈ I}.
Before describing the method, and for illustrative purposes, a simple example is given in Section 3.2.1; then definitions, properties, and details of the subgradient method are given in Sections 3.2.2 and 3.2.3.
3.2.1 An Example
Consider the two-dimensional problem:

max f(x),   x = (x1, x2) ∈ E2

where

f(x) = min {(x1 + x2 − 12); (9 − x1); (9 − x2)}.   (3.14)

A plot and some contour lines of the concave and piecewise linear function f(x) are given in Figures 3.1 and 3.2, respectively.

Figure 3.1 Plot of f(x) as defined in (3.14)
Figure 3.2 Some contour lines of f(x) as defined in (3.14)
From Figure 3.2, E2 is divided by the rays VA, VB, VC into three disjoint regions S1, S2, S3 such that:

f(x) = (9 − x1),          x ∈ S1
f(x) = (9 − x2),          x ∈ S2
f(x) = (x1 + x2 − 12),    x ∈ S3
while on the disjoint rays VA, VB, VC, we have:

f(x) = (x1 + x2 − 12) or (9 − x1),   x ∈ VA
f(x) = (x1 + x2 − 12) or (9 − x2),   x ∈ VB
f(x) = (9 − x1) or (9 − x2),         x ∈ VC

and finally:

f(x) = (x1 + x2 − 12) or (9 − x1) or (9 − x2),   x = V

which characterizes the maximum of f(x).
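The structure just described is easy to verify numerically. A minimal sketch in Python (the function name f and the sample points are ours, not the chapter's):

```python
# f(x) from (3.14): the pointwise minimum of three linear pieces.
def f(x1, x2):
    return min(x1 + x2 - 12, 9 - x1, 9 - x2)

# Interior points of S1, S2, S3: exactly one piece attains the minimum.
assert f(10, 5) == 9 - 10        # S1: piece (9 - x1) is active
assert f(5, 10) == 9 - 10        # S2: piece (9 - x2) is active
assert f(5, 5) == 5 + 5 - 12     # S3: piece (x1 + x2 - 12) is active

# At V = (7, 7) all three pieces coincide and f attains its maximum 2.
assert f(7, 7) == 2
for (x1, x2) in [(6, 7), (8, 7), (7, 6), (7, 8), (10, 5), (0, 0)]:
    assert f(x1, x2) < 2
```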
Inside S1, S2, S3, f(x) is differentiable, since it coincides with (linear and therefore) differentiable functions; this is not true on the rays VA, VB, VC, which separate different values of ∇f as shown in Figure 3.3.
However, the directional derivative along an arbitrary unit vector d, i.e.

f′(x, d) = lim_{t→0+} (f(x + td) − f(x))/t   (3.15)
Figure 3.3 Some representations of a_i = [∇f]_{x∈S_i} (i = 1, 2, 3)
exists for all x, d ∈ E2, and gives a measure of the rate of change of f(x) along d. Let

a_i = [∇f]_{x∈S_i}   (i = 1, 2, 3)

and let I(x) be the index set of the regions S_i adjacent to or containing the point x, i.e.

I(x) = {i | f(x) = (a_i x − b_i), i = 1, 2, 3}

and consider the set A(x) of active gradients in x, i.e.

A(x) = {a_i | i ∈ I(x)}.
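For the example (3.14) these sets can be computed mechanically. A sketch (the helper names index_set and active_gradients and the tolerance are our assumptions):

```python
# Linear pieces of (3.14) written as (a_i x - b_i): a_i is the gradient.
A = [(-1.0, 0.0), (0.0, -1.0), (1.0, 1.0)]   # a_1, a_2, a_3
B = [-9.0, -9.0, 12.0]                        # b_1, b_2, b_3

def f(x):
    return min(a[0]*x[0] + a[1]*x[1] - b for a, b in zip(A, B))

def index_set(x, tol=1e-9):
    """I(x): indices of the pieces attaining the minimum at x."""
    fx = f(x)
    return [i for i, (a, b) in enumerate(zip(A, B), start=1)
            if abs(a[0]*x[0] + a[1]*x[1] - b - fx) <= tol]

def active_gradients(x):
    """A(x) = {a_i | i in I(x)}."""
    return [A[i - 1] for i in index_set(x)]

assert index_set((10, 5)) == [1]        # interior of S1
assert index_set((8, 8))  == [1, 2]     # a point on the ray VC
assert index_set((7, 7))  == [1, 2, 3]  # the vertex V
```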
It is easily shown that:

f′(x, d) = min {a_i d | a_i ∈ A(x)}   (3.16)

and therefore f′(x, d) is simply the (length of the) minimal component along d of the active gradients of f(x) at x. Figure 3.4 tries to visualize f′(x, d) for x ∈ S3 and x ∈ VC and the indicated d.

Property (3.16) may obviously be generalized as follows:

f′(x, d) = min {a d | a ∈ Ā(x)}   (3.17)
Figure 3.4 f′(x, d) = |PP′|, for x = P ∈ S3 and d = PP′/|PP′|; f′(x, d) = |CC′|, for x = C and d = CC′/|CC′|
where Ā(x) is the set of all the convex combinations of active gradients of f(x) in x, i.e.

Ā(x) = {a | a = Σ_{i∈I(x)} λ_i a_i;  Σ_{i∈I(x)} λ_i = 1;  λ_i ≥ 0}.
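Property (3.16) can be checked against the one-sided limit (3.15) on the example (3.14). A sketch (helper names and the finite-difference step size are ours):

```python
import math

def f(x1, x2):
    return min(x1 + x2 - 12, 9 - x1, 9 - x2)

# Gradients of the three linear pieces, indexed as the regions S1, S2, S3.
grads = {1: (-1.0, 0.0), 2: (0.0, -1.0), 3: (1.0, 1.0)}

def active(x1, x2, tol=1e-9):
    vals = {1: 9 - x1, 2: 9 - x2, 3: x1 + x2 - 12}
    fx = min(vals.values())
    return [grads[i] for i in vals if abs(vals[i] - fx) <= tol]

def dir_deriv(x1, x2, d):
    # (3.16): minimum over the active gradients of a . d
    return min(a[0]*d[0] + a[1]*d[1] for a in active(x1, x2))

# Compare with the one-sided limit (3.15) at a point on the ray VC,
# where the two active gradients are a1 and a2.
x, d = (8.0, 8.0), (1/math.sqrt(2), 1/math.sqrt(2))
t = 1e-7
numeric = (f(x[0] + t*d[0], x[1] + t*d[1]) - f(x[0], x[1])) / t
assert abs(dir_deriv(*x, d) - numeric) < 1e-6
assert abs(dir_deriv(*x, d) - (-1/math.sqrt(2))) < 1e-12
```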
Figure 3.5 Convex combination of active gradients and directional derivatives

Property (3.17) is illustrated in Figure 3.5: I(C) = {1, 2}; A(C) = {a1, a2} is the set of the active gradients in C; d is an arbitrary direction; Ā(C) = {CC′ | C′ ∈ C1C2}; |CH| ≤ ad ≤ |CK| for a ∈ Ā(C); f′(C, d) = |CH| = a2 d.
Peculiar to the elements of Ā(x) is the property

f(y) − f(x) ≤ a(y − x)

for all x, y ∈ En and a ∈ Ā(x), as will be shown in the following (Ā(x) is usually called the subdifferential, while each a ∈ Ā(x) is a subgradient of f(x) at x).
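The subgradient inequality can be spot-checked on the example (3.14), where any active gradient is a subgradient. A sketch (the sampling range and helper names are ours):

```python
import random

def f(x1, x2):
    return min(x1 + x2 - 12, 9 - x1, 9 - x2)

grads = {1: (-1.0, 0.0), 2: (0.0, -1.0), 3: (1.0, 1.0)}

def a_subgradient(x1, x2):
    """Gradient of one piece active at x: an element of the subdifferential."""
    vals = {1: 9 - x1, 2: 9 - x2, 3: x1 + x2 - 12}
    return grads[min(vals, key=vals.get)]

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-20, 20), random.uniform(-20, 20))
    y = (random.uniform(-20, 20), random.uniform(-20, 20))
    a = a_subgradient(*x)
    # f(y) - f(x) <= a(y - x) for the concave f (small slack for rounding)
    assert f(*y) - f(*x) <= a[0]*(y[0]-x[0]) + a[1]*(y[1]-x[1]) + 1e-12
```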
3.2.2 General Properties
After this illustrative example, let us return to the general problem (3.1) and recall some definitions and general properties, referring to Rockafellar (1970) for a complete treatment of the subject. For the sake of simplicity, we suppose S = En and f(x) bounded above; the extension to the general case can be found in Held, Wolfe, and Crowder (1974) and Oettli (1972) and does not involve particular difficulties.
Definitions
I(x) = {i | f(x) = (a_i x − b_i), i ∈ I}

A(x) = {a_i | i ∈ I(x)}

Ā(x) = {a | a = Σ_{i∈I(x)} λ_i a_i;  Σ_{i∈I(x)} λ_i = 1;  λ_i ≥ 0}
Proposition As defined in (3.1), f(x) is concave. In fact:

f(z) = a_k z − b_k,   k ∈ I(z)   (3.18a)
f(x) ≤ a_k x − b_k   (3.18b)
f(y) ≤ a_k y − b_k   (3.18c)

and, if z = λx + (1 − λ)y and 0 ≤ λ ≤ 1, the sum of (3.18b) and (3.18c), with weights λ and (1 − λ) respectively, gives:

λ f(x) + (1 − λ) f(y) ≤ f(λx + (1 − λ)y),   ∀ x, y ∈ En
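Although the details of the iteration are given only in Sections 3.2.2 and 3.2.3, the general idea of subgradient ascent can already be sketched on example (3.14). The step length below is a Polyak-type rule, which presupposes that the optimal value f* = 2 is known; it is our illustrative assumption, not a rule taken from this chapter:

```python
def f(x1, x2):
    return min(x1 + x2 - 12, 9 - x1, 9 - x2)

grads = {1: (-1.0, 0.0), 2: (0.0, -1.0), 3: (1.0, 1.0)}

def subgrad(x1, x2):
    """Gradient of one active piece: a subgradient of the concave f."""
    vals = {1: 9 - x1, 2: 9 - x2, 3: x1 + x2 - 12}
    return grads[min(vals, key=vals.get)]

f_star = 2.0          # known optimum f(V) = f(7, 7) of this toy problem
x = [1.0, 0.0]
for _ in range(60):
    a = subgrad(*x)
    # Polyak-type step: (f* - f(x)) / ||a||^2, moving along the subgradient
    t = (f_star - f(*x)) / (a[0]**2 + a[1]**2)
    x[0] += t * a[0]
    x[1] += t * a[1]

assert abs(f(*x) - 2.0) < 1e-6
assert abs(x[0] - 7.0) < 1e-3 and abs(x[1] - 7.0) < 1e-3
```

Note that the iterates zig-zag across the rays VA, VB, VC rather than following a smooth gradient path; this non-monotone behaviour is characteristic of the subgradient method.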