We start the derivation with the expression for P_F given in (454). Motivated by our result in the bound derivation (458), we choose s so that
\[
\dot{\mu}(s) = \gamma.
\]
Then (454) becomes
\[
P_F = \int_{\dot{\mu}(s)}^{\infty} e^{\mu(s) - sX}\, p_{x_s}(X)\, dX. \tag{474}
\]
This can be written as
\[
P_F = e^{\mu(s) - s\dot{\mu}(s)} \int_{\dot{\mu}(s)}^{\infty} e^{s[\dot{\mu}(s) - X]}\, p_{x_s}(X)\, dX. \tag{475}
\]
The term outside the integral is just the bound in (461). We now use a central limit theorem argument to evaluate the integral. First define a standardized variable:
\[
y \triangleq \frac{x_s - E[x_s]}{(\operatorname{Var}[x_s])^{1/2}} = \frac{x_s - \dot{\mu}(s)}{\sqrt{\ddot{\mu}(s)}}. \tag{476}
\]
Substituting (476) into (475), we have
\[
P_F = e^{\mu(s) - s\dot{\mu}(s)} \int_{0}^{\infty} e^{-s\sqrt{\ddot{\mu}(s)}\,Y}\, p_y(Y)\, dY. \tag{477}
\]
124 2.7 Performance Bounds and Approximations
In many cases the probability density governing y is such that y approaches a Gaussian random variable as N (the number of components of r) approaches infinity.† A simple case in which this is true is the case in which the r_i are independent, identically distributed random variables with finite means and variances. In such cases, y approaches a zero-mean Gaussian random variable with unit variance and the integral in (477) can be evaluated by substituting the limiting density.
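The limiting behavior invoked here can be illustrated with a quick Monte Carlo sketch (a hypothetical setup: i.i.d. Uniform(0, 1) components standing in for the r_i, and the function name `standardized_sum` is ours). The tail probability of the standardized sum approaches the Gaussian value erfc_*(1) ≈ 0.159 as N grows:

```python
import math
import random

random.seed(0)

def standardized_sum(n, trials=20000):
    """Empirical Pr(y > 1) for the standardized sum of n i.i.d. Uniform(0,1)
    variables; Uniform(0,1) has mean 1/2 and variance 1/12."""
    count = 0
    for _ in range(trials):
        s = sum(random.random() for _ in range(n))
        y = (s - n * 0.5) / math.sqrt(n / 12.0)
        if y > 1.0:
            count += 1
    return count / trials

# For n = 1 the tail differs noticeably from the Gaussian value;
# for n = 20 it is already close to erfc_*(1) = 0.1587.
print(standardized_sum(1), standardized_sum(20))
```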
Substituting the limiting density, the integral in (477) becomes
\[
\int_{0}^{\infty} e^{-s\sqrt{\ddot{\mu}(s)}\,Y}\, \frac{1}{\sqrt{2\pi}}\, e^{-Y^2/2}\, dY = e^{s^2\ddot{\mu}(s)/2}\, \operatorname{erfc}_*\!\bigl(s\sqrt{\ddot{\mu}(s)}\bigr), \tag{478}
\]
so that
\[
P_F \simeq \exp\!\Bigl[\mu(s) - s\dot{\mu}(s) + \frac{s^2}{2}\ddot{\mu}(s)\Bigr]\, \operatorname{erfc}_*\!\bigl[s\sqrt{\ddot{\mu}(s)}\bigr]. \tag{479}
\]
The approximation arises because y is only approximately Gaussian for finite N. For values of $s\sqrt{\ddot{\mu}(s)} > 3$ we can approximate $\operatorname{erfc}_*(\cdot)$ by the upper bound in (71). Using this approximation,
\[
P_F \simeq \frac{1}{\sqrt{2\pi s^2 \ddot{\mu}(s)}}\, \exp\bigl[\mu(s) - s\dot{\mu}(s)\bigr], \qquad s\sqrt{\ddot{\mu}(s)} > 3. \tag{480}
\]
It is easy to verify that the approximate expression in (480) can also be obtained by letting $p_y(Y) \simeq p_y(0) = 1/\sqrt{2\pi}$. Looking at Fig. 2.39, we see that this is valid when the exponential function decreases to a small value while $Y \ll 1$. In exactly the same manner we obtain
\[
P_M \simeq \frac{1}{\sqrt{2\pi (1-s)^2 \ddot{\mu}(s)}}\, \exp\bigl[\mu(s) + (1-s)\dot{\mu}(s)\bigr], \qquad s < 1. \tag{483}
\]
Observe that the exponents in (480) and (483) are identical to those obtained by using the Chernoff bound. The central limit theorem argument has provided a multiplicative factor that will be significant in many of the applications of interest to us.
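The rule of thumb $s\sqrt{\ddot{\mu}(s)} > 3$ for replacing $\operatorname{erfc}_*(\cdot)$ by its upper bound can be checked numerically. A minimal sketch, assuming erfc_*(x) is the Gaussian tail integral and the bound in (71) has the form $(x\sqrt{2\pi})^{-1}e^{-x^2/2}$ (function names here are ours):

```python
import math

def erfc_star(x):
    """Gaussian tail integral: Q(x) = (1/sqrt(2*pi)) * integral_x^inf e^{-t^2/2} dt."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_upper_bound(x):
    """Upper bound (1/(x*sqrt(2*pi))) * exp(-x^2/2), valid for x > 0."""
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

# Beyond x = 3 the bound is within about 10% of the true tail.
for x in (1.0, 3.0, 5.0):
    exact = erfc_star(x)
    bound = tail_upper_bound(x)
    print(f"x={x}: Q(x)={exact:.3e}, bound={bound:.3e}, ratio={bound/exact:.4f}")
```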
† An excellent discussion is contained in Feller, pp. 517-520.
For the case in which Pr(ε) is the criterion and the hypotheses are equally likely we have
\[
\Pr(\epsilon) = \tfrac{1}{2} P_F + \tfrac{1}{2} P_M
= \tfrac{1}{2} \exp\!\Bigl[\mu(s_m) + \frac{s_m^2}{2}\ddot{\mu}(s_m)\Bigr]\, \operatorname{erfc}_*\!\bigl[s_m\sqrt{\ddot{\mu}(s_m)}\bigr]
+ \tfrac{1}{2} \exp\!\Bigl[\mu(s_m) + \frac{(1-s_m)^2}{2}\ddot{\mu}(s_m)\Bigr]\, \operatorname{erfc}_*\!\bigl[(1-s_m)\sqrt{\ddot{\mu}(s_m)}\bigr], \tag{484}
\]
where $s_m$ is defined in the statement preceding (472) [i.e., $\dot{\mu}(s_m) = 0 = \gamma$]. When both $s_m\sqrt{\ddot{\mu}(s_m)} > 3$ and $(1-s_m)\sqrt{\ddot{\mu}(s_m)} > 3$, this reduces to
\[
\Pr(\epsilon) \simeq \frac{1}{2\sqrt{2\pi\ddot{\mu}(s_m)}\; s_m(1-s_m)}\, \exp\bigl[\mu(s_m)\bigr]. \tag{485}
\]
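As a sanity check on (485), consider the i.i.d. Gaussian case treated in Example 1 below, where $\mu(s) = (d^2/2)(s^2 - s)$, $s_m = \tfrac{1}{2}$, $\ddot{\mu}(s_m) = d^2$, and the exact error probability is $\operatorname{erfc}_*(d/2)$. A short sketch (function names are ours) compares (485) with the exact value:

```python
import math

def erfc_star(x):
    """Gaussian tail integral Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pr_eps_approx(d):
    """Approximation (485) for the Gaussian example:
    mu(s_m) = -d^2/8, mu''(s_m) = d^2, s_m = 1/2."""
    sm = 0.5
    mu = (d * d / 2.0) * (sm * sm - sm)   # = -d^2/8
    mu_dd = d * d
    return math.exp(mu) / (2.0 * math.sqrt(2.0 * math.pi * mu_dd) * sm * (1.0 - sm))

for d in (4.0, 6.0, 8.0):
    exact = erfc_star(d / 2.0)
    approx = pr_eps_approx(d)
    print(f"d={d}: exact={exact:.3e}, approx (485)={approx:.3e}")
```

The approximation tracks the exact value closely as d grows, while slightly exceeding it, as expected from the tail upper bound used in the reduction.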
We now consider several examples to illustrate the application of these ideas. The first is one in which the exact performance is known. We go
through the bounds and approximations to illustrate the manipulations involved.
Example 1. In this example we consider the simple Gaussian problem first introduced on p. 27:
\[
p_{\mathbf{r}|H_1}(\mathbf{R}\,|\,H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\Bigl(-\frac{(R_i - m)^2}{2\sigma^2}\Bigr), \tag{486}
\]
\[
p_{\mathbf{r}|H_0}(\mathbf{R}\,|\,H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\Bigl(-\frac{R_i^2}{2\sigma^2}\Bigr). \tag{487}
\]
Then, using (449),
\[
\mu(s) = \ln \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\Bigl[-\frac{(R_i - m)^2 s + R_i^2(1-s)}{2\sigma^2}\Bigr]\, d\mathbf{R}. \tag{488a}
\]
Because all the integrals are identical,
\[
\mu(s) = N \ln \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\Bigl[-\frac{(R - m)^2 s + R^2(1-s)}{2\sigma^2}\Bigr]\, dR. \tag{488b}
\]
Integrating, we have
\[
\mu(s) = \frac{Nm^2}{2\sigma^2}\,(s^2 - s) = \frac{d^2}{2}\,(s^2 - s), \tag{489}
\]
where $d^2$ was defined in the statement after (64). The curve is shown in Fig. 2.40. Differentiating (489) gives
\[
\dot{\mu}(s) = \frac{(2s - 1)\,d^2}{2}.
\]
Using the bounds in (470), we have
\[
P_F \le \exp\!\Bigl(-\frac{s^2 d^2}{2}\Bigr),
\]
\[
P_M \le \exp\!\Bigl[-\frac{(1-s)^2 d^2}{2}\Bigr], \qquad 0 \le s \le 1.
\]
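These exponential bounds can be compared with the exact tail probabilities, which for this Gaussian problem are $P_F = \operatorname{erfc}_*(sd)$ and $P_M = \operatorname{erfc}_*[(1-s)d]$ (equivalent to (64) and (68)). A brief numerical sketch (function names are ours):

```python
import math

def erfc_star(x):
    """Gaussian tail integral Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pf_bound(s, d):
    """Chernoff bound on P_F: exp(-s^2 d^2 / 2)."""
    return math.exp(-s * s * d * d / 2.0)

def pm_bound(s, d):
    """Chernoff bound on P_M: exp(-(1-s)^2 d^2 / 2)."""
    return math.exp(-(1.0 - s) ** 2 * d * d / 2.0)

d, s = 6.0, 0.5
print(f"P_F bound = {pf_bound(s, d):.3e}, exact P_F = {erfc_star(s * d):.3e}")
print(f"P_M bound = {pm_bound(s, d):.3e}, exact P_M = {erfc_star((1 - s) * d):.3e}")
```

The bounds capture the correct exponent but overstate the probabilities by roughly the multiplicative factor the central limit theorem argument supplies.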
Fig. 2.40 μ(s) for Gaussian variables with unequal means.
Because $l(\mathbf{R})$ is the sum of Gaussian random variables, the expressions in (479) and (482) are exact. Evaluating $\ddot{\mu}(s) = d^2$ and substituting, we obtain
\[
P_F = \operatorname{erfc}_*(sd),
\]
\[
P_M = \operatorname{erfc}_*\bigl[(1-s)d\bigr].
\]
These expressions are identical to (64) and (68) (let $s = (\ln\eta)/d^2 + \tfrac{1}{2}$).
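That identification can be verified directly: with $s = (\ln\eta)/d^2 + \tfrac{1}{2}$ we have $sd = (\ln\eta)/d + d/2$ and $(1-s)d = d/2 - (\ln\eta)/d$, which are the arguments appearing in (64) and (68). A quick check with illustrative values of $\eta$ and $d$:

```python
import math

def erfc_star(x):
    """Gaussian tail integral Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

d = 4.0
for eta in (0.5, 1.0, 2.0):
    s = math.log(eta) / (d * d) + 0.5
    # (64): P_F = erfc_*(ln(eta)/d + d/2), which should equal erfc_*(s*d).
    pf_64 = erfc_star(math.log(eta) / d + d / 2.0)
    # (68): P_M = erfc_*(d/2 - ln(eta)/d), which should equal erfc_*((1-s)*d).
    pm_68 = erfc_star(d / 2.0 - math.log(eta) / d)
    assert abs(pf_64 - erfc_star(s * d)) < 1e-12
    assert abs(pm_68 - erfc_star((1.0 - s) * d)) < 1e-12
    print(f"eta={eta}: P_F={pf_64:.3e}, P_M={pm_68:.3e}")
```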