Van Trees, H. L., Detection, Estimation, and Modulation Theory, Part I. New York: Wiley & Sons, 2001. 710 p. ISBN 0-471-09517-6.

In general, the right-hand side cannot be written in the form required by (187), and
therefore an unbiased efficient estimate does not exist.
The likelihood equation is
$$\frac{1}{\sigma_n^2}\,\frac{\partial s(A)}{\partial A}\bigg|_{A=\hat{a}_{ml}(\mathbf{R})}\sum_{i=1}^{N}\left[R_i - s\big(\hat{a}_{ml}(\mathbf{R})\big)\right] = 0. \tag{207}$$
If the range of $s(A)$ includes $(1/N)\sum_{i=1}^{N} R_i$, a solution exists:
$$s[\hat{a}_{ml}(\mathbf{R})] = \frac{1}{N}\sum_{i=1}^{N} R_i\,. \tag{208}$$
If (208) can be satisfied, then
$$\hat{a}_{ml}(\mathbf{R}) = s^{-1}\!\left(\frac{1}{N}\sum_{i=1}^{N} R_i\right). \tag{209}$$
[Observe that (209) tacitly assumes that $s^{-1}(\cdot)$ exists. If it does not, then even in the absence of noise we shall be unable to determine $A$ unambiguously. If we were designing a system, we would always choose an $s(\cdot)$ that allows us to find $A$ unambiguously in the absence of noise.] If the range of $s(A)$ does not include $(1/N)\sum_{i=1}^{N} R_i$, the maximum is at an end point of the range.
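To make (208) and (209) concrete, here is a minimal numerical sketch. It is ours, not the text's: the mapping $s(A) = A^3$, the true value $A = 1.5$, the noise level, and the sample size are all hypothetical choices, picked only because this $s(\cdot)$ is one-to-one with a simple inverse.

import numpy as np

# Hypothetical model r_i = s(A) + n_i with s(A) = A**3 (one-to-one, so s^{-1} exists)
rng = np.random.default_rng(0)
A_true, sigma_n, N = 1.5, 0.2, 100

s = lambda a: a ** 3            # assumed nonlinear mapping s(.)
s_inv = np.cbrt                 # its inverse
R = s(A_true) + sigma_n * rng.standard_normal(N)   # simulated observations

y_ml = R.mean()                 # (208): s[a_ml(R)] = (1/N) * sum of R_i
a_ml = s_inv(y_ml)              # (209): a_ml(R) = s^{-1} of the sample mean
print(y_ml, a_ml)               # a_ml should be close to A_true = 1.5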
We see that the maximum likelihood estimate commutes over nonlinear operations. (This is not true for MS or MAP estimation.) If it is unbiased, we evaluate the bound on the variance by differentiating (206):
$$\frac{\partial^2 \ln p_{r|a}(\mathbf{R}\,|\,A)}{\partial A^2} = \frac{1}{\sigma_n^2}\,\frac{\partial^2 s(A)}{\partial A^2}\sum_{i=1}^{N}\left[R_i - s(A)\right] - \frac{N}{\sigma_n^2}\left[\frac{\partial s(A)}{\partial A}\right]^2. \tag{210}$$
Observing that
$$E[r_i - s(A)] = E(n_i) = 0, \tag{211}$$
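the expectation of the first term in (210) is zero, so
$$E\!\left[\frac{\partial^2 \ln p_{r|a}(\mathbf{R}\,|\,A)}{\partial A^2}\right] = -\frac{N}{\sigma_n^2}\left[\frac{\partial s(A)}{\partial A}\right]^2;$$
substituting this into the Cramér-Rao inequality,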
we obtain the following bound for any unbiased estimate,
$$\operatorname{Var}[\hat{a}(\mathbf{R}) - A] \ge \frac{\sigma_n^2}{N\,[\partial s(A)/\partial A]^2}\,. \tag{212}$$
We see that the bound is exactly the same as that in Example 2 except for a factor $[\partial s(A)/\partial A]^2$. The intuitive reason for this factor, and also some feeling for the conditions under which the bound will be useful, may be obtained by inspecting the typical function shown in Fig. 2.22. Define
$$Y = s(A). \tag{213}$$
Then
$$\hat{y} = Y + \frac{1}{N}\sum_{i=1}^{N} n_i\,. \tag{214}$$
The variance in estimating $Y$ is just $\sigma_n^2/N$. However, if $y_\epsilon$, the error in estimating $Y$, is small enough so that the slope is constant, then
$$a_\epsilon \simeq \frac{y_\epsilon}{\partial s(A)/\partial A}\,, \tag{215}$$

Fig. 2.22 Behavior of error variance in the presence of small errors.
and
$$\operatorname{Var}(a_\epsilon) \simeq \frac{\operatorname{Var}(y_\epsilon)}{[\partial s(A)/\partial A]^2} = \frac{\sigma_n^2}{N\,[\partial s(A)/\partial A]^2}\,. \tag{216}$$
We observe that if $y_\epsilon$ is large there will no longer be a simple linear relation between $y_\epsilon$ and $a_\epsilon$. This tells us when we can expect the Cramér-Rao bound to give an accurate answer in the case in which the parameter enters the problem in a nonlinear manner. Specifically, whenever the estimation error is small enough that the slope $\partial s(A)/\partial A$ is essentially constant over it (that is, small relative to the scale set by $\partial^2 s(A)/\partial A^2$), we should expect the actual variance to be close to the variance bound given by the Cramér-Rao inequality.
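A small Monte Carlo sketch (ours again, using the same hypothetical $s(A) = A^3$ model and illustrative parameter values) shows the point: for small observation noise the empirical variance of $\hat{a}_{ml}$ essentially equals the bound (212), while for large noise the linearization behind (215) and (216) breaks down, the estimate is no longer effectively unbiased, and its variance no longer tracks the bound.

import numpy as np

# Compare the empirical variance of a_ml with the Cramer-Rao bound (212)
# for small and large observation noise. Illustrative values only.
rng = np.random.default_rng(1)
A_true, N, trials = 1.5, 20, 20000
ds_dA = 3 * A_true ** 2                       # ds(A)/dA for s(A) = A**3

for sigma_n in (0.05, 8.0):
    R = A_true ** 3 + sigma_n * rng.standard_normal((trials, N))
    a_ml = np.cbrt(R.mean(axis=1))            # (208)-(209), one estimate per trial
    crb = sigma_n ** 2 / (N * ds_dA ** 2)     # bound (212)
    print(f"sigma_n={sigma_n}: Var={a_ml.var():.3e}  bound={crb:.3e}  "
          f"ratio={a_ml.var() / crb:.2f}")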
The properties of the ML estimate which are valid when the error is small are generally referred to as asymptotic. One procedure for developing them formally is to study the behavior of the estimate as the number of independent observations N approaches infinity. Under reasonably general conditions the following may be proved (e.g., Cramer [9], pp. 500-504).
1. The solution of the likelihood equation (177) converges in probability to the correct value of $A$ as $N \to \infty$. Any estimate with this property is called consistent. Thus the ML estimate is consistent.
2. The ML estimate is asymptotically efficient; that is, the variance of the estimation error approaches the Cramér-Rao bound as $N \to \infty$.
3. The ML estimate is asymptotically Gaussian, $N(A, \sigma_{a_\epsilon})$.
These properties all deal with the behavior of ML estimates for large N. They provide some motivation for using the ML estimate even when an efficient estimate does not exist.
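The following sketch (ours, with the same hypothetical $r_i = A^3 + n_i$ model and illustrative values) exhibits properties 1-3 numerically: as $N$ grows, the bias of $\hat{a}_{ml}$ shrinks toward zero (consistency), the ratio of its variance to the bound (212) approaches one (asymptotic efficiency), and the standardized errors cover $\pm 1.96$ with probability approaching 0.95, as a Gaussian variable would.

import numpy as np

# Behaviour of the ML estimate as N grows (illustrative model only).
rng = np.random.default_rng(2)
A_true, sigma_n, trials = 1.5, 3.0, 20000
ds_dA = 3 * A_true ** 2

for N in (5, 50, 500):
    R = A_true ** 3 + sigma_n * rng.standard_normal((trials, N))
    a_ml = np.cbrt(R.mean(axis=1))
    crb = sigma_n ** 2 / (N * ds_dA ** 2)         # Cramer-Rao bound (212)
    z = (a_ml - A_true) / np.sqrt(crb)            # standardized estimation errors
    print(f"N={N:3d}  bias={a_ml.mean() - A_true:+.4f}  "
          f"Var/CRB={a_ml.var() / crb:.3f}  "
          f"P(|z|<1.96)={np.mean(np.abs(z) < 1.96):.3f}")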
At this point a logical question is: Do better estimation procedures than the maximum likelihood procedure exist? Certainly if an efficient estimate does not exist, there may be unbiased estimates with lower variances. The difficulty is that there is no general rule for finding them. In a particular situation we can try to improve on the ML estimate. In almost all cases, however, the resulting estimation rule is more complex, and therefore we emphasize the maximum likelihood technique in all of our work with real variables.
A second logical question is: Do better lower bounds than the Cramér-Rao inequality exist? One straightforward but computationally tedious procedure is the Bhattacharyya bound. The Cramér-Rao bound uses $\partial^2 p_{r|a}(\mathbf{R}\,|\,A)/\partial A^2$. Whenever an efficient estimate does not exist, a larger bound which involves the higher partial derivatives can be obtained. Simple derivations are given in [13] and [14] and in Problems 2.4.23-24. For the cases of interest to us the computation is too involved to make the bound of much practical value.
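For reference, the usual statement of the result is as follows (this is the standard textbook form, sketched here for convenience; it is not necessarily the notation used in [13], [14], or the cited problems). For any unbiased estimate $\hat{a}(\mathbf{R})$ and any order $k$,
$$\operatorname{Var}[\hat{a}(\mathbf{R}) - A] \ \ge\ \left[\mathbf{B}^{-1}\right]_{11}, \qquad B_{ij} = E\!\left[\frac{1}{p_{r|a}^2(\mathbf{R}\,|\,A)}\,\frac{\partial^i p_{r|a}(\mathbf{R}\,|\,A)}{\partial A^i}\,\frac{\partial^j p_{r|a}(\mathbf{R}\,|\,A)}{\partial A^j}\right], \qquad i, j = 1, \ldots, k,$$
which reduces to the Cramér-Rao inequality when $k = 1$.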
A second bound is the Barankin bound,
$$\operatorname{Var}[\hat{a}_{ml}(\mathbf{R}) - A] \ \ge\ \cdots$$