Van Trees, H. L. Detection, Estimation, and Modulation Theory, Part I. Wiley & Sons, 2001. 710 p.
ISBN 0-471-09517-6

The importance of these results should not be underestimated, because they lead to solutions that can be evaluated easily with numerical techniques (a sketch follows this list). We develop these techniques in greater detail in Part II and use them to solve various problems.
5. In Chapter 4 we discussed whitening filters for the problem of detecting signals in colored noise. In the initial discussion we did not require realizability. When we examined the infinite-interval stationary-process case (p. 312), we determined that a realizable whitening filter could be found and that one of its components could be interpreted as an optimum realizable estimate of the colored noise. A similar result can be derived for the finite-interval nonstationary case (see Problem 6.6.5). This enables us to use state-variable techniques to find the whitening filter (a second sketch follows). This result will also be valuable in Chapter II.3.
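As a minimal illustration of the numerical techniques referred to above, the sketch below integrates a scalar variance (Riccati) equation by Euler stepping. The first-order message model, the symbols k, q, N0/2, and all parameter values are assumptions chosen for illustration; they are not taken from the text.

```python
import numpy as np

# Minimal sketch: Euler integration of the scalar variance (Riccati) equation
#   d(xi)/dt = -2*k*xi + q - xi**2 / (N0/2)
# for a hypothetical first-order message model (all of this is assumed):
#   dx/dt = -k*x + u,    E[u(t)u(s)] = q * delta(t - s)
#   r(t)  =  x(t) + w,   E[w(t)w(s)] = (N0/2) * delta(t - s)

k, q, half_N0 = 1.0, 2.0, 0.5   # assumed model parameters
dt, T = 1e-4, 5.0               # step size and final time (assumed)

xi = q / (2.0 * k)              # start from the unfiltered (prior) variance
for _ in range(int(T / dt)):
    xi += dt * (-2.0 * k * xi + q - xi**2 / half_N0)

# Steady state from setting d(xi)/dt = 0, a quadratic in xi:
xi_ss = half_N0 * (-k + np.sqrt(k**2 + q / half_N0))
print(f"integrated xi(T) = {xi:.6f}   steady state = {xi_ss:.6f}")
```

The integrated variance settles to the positive root of the steady-state quadratic, the kind of closed-form check that the state-variable formulation makes routine.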
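The whitening idea in item 5 can also be sketched in discrete time: subtract the optimum realizable (one-step) estimate of the colored component and normalize by the innovation variance. The first-order colored-noise model and every parameter value below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete-time sketch of whitening by subtracting the optimum realizable
# estimate of the colored component (model and parameters assumed):
#   colored noise  c[k+1] = a*c[k] + u[k],  u ~ N(0, q)
#   observation    r[k]   = c[k] + w[k],    w ~ N(0, s2w)
a, q, s2w, N = 0.95, 0.1, 0.2, 20000

c = np.zeros(N)
for k in range(N - 1):
    c[k + 1] = a * c[k] + rng.normal(0.0, np.sqrt(q))
r = c + rng.normal(0.0, np.sqrt(s2w), N)

# One-step Kalman recursion; nu[k] is the normalized innovation.
chat, P = 0.0, q / (1 - a**2)        # prior mean and stationary variance
nu = np.zeros(N)
for k in range(N):
    S = P + s2w                      # innovation variance
    e = r[k] - chat                  # r minus its realizable estimate
    nu[k] = e / np.sqrt(S)
    chat = a * (chat + (P / S) * e)  # predict the next colored sample
    P = a**2 * (P - P**2 / S) + q

# The normalized innovations should be approximately white, unit variance.
print("innovation variance ~", nu.var())
print("lag-1 correlation   ~", np.corrcoef(nu[:-1], nu[1:])[0, 1])
```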
6.7 PROBLEMS
P6.1 Properties of Linear Processors

Problem 6.1.1. Let
$$r(t) = a(t) + n(t), \qquad T_i \le t \le T_f,$$
where a(t) and n(t) are uncorrelated Gaussian zero-mean processes with covariance functions $K_a(t, u)$ and $K_n(t, u)$, respectively. Find the MMSE estimate $\hat{a}(t)$, $T_i < t < T_f$.
Problem 6.1.2. Consider the model in Fig. 6.3.
1. Derive Property 3V (51).
2. Specialize (51) to the case in which $d(t) = x(t)$.
Problem 6.1.3. Consider the vector model in Fig. 6.3. Prove that
$$\mathbf{h}_o(t, t)\,\mathbf{R}(t) = \boldsymbol{\xi}_P(t)\,\mathbf{C}^T(t).$$
Comment. Problems 6.1.4 to 6.1.9 illustrate cases in which the observation is a finite set of random variables. In addition, the observation noise is zero. They illustrate the simplicity that (29) leads to in linear estimation problems.
Problem 6.1.4. Consider a simple prediction problem. We observe a(t) at a single time. The desired signal is
$$d(t) = a(t + \alpha), \qquad \alpha > 0.$$
1. Find the best linear MMSE estimate of d(t).
2. What is the mean-square error?
3. Specialize to the case $K_a(\tau) = e^{-k|\tau|}$.
4. Show that, for the correlation function in part 3, the MMSE estimate would not change if the entire past were available (a numerical check follows this problem).
5. Is this true for any other correlation function? Justify your answer.
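A quick numerical check of part 4, under the assumption of a unit-variance process with $K_a(\tau) = e^{-k|\tau|}$: solve the two-sample normal equations for predicting $a(t + \alpha)$ from $a(t)$ and one earlier sample. The weight on the older sample comes out zero, so extending the observation into the past changes nothing. The parameter values are arbitrary illustrations.

```python
import numpy as np

# Check of part 4 for K_a(tau) = exp(-k|tau|), unit variance assumed:
# predict d = a(t + alpha) from y1 = a(t) and an older sample y2 = a(t - Delta).
k, alpha, Delta = 1.0, 0.5, 0.3     # arbitrary illustration values
Ka = lambda tau: np.exp(-k * np.abs(tau))

# Normal equations K c = k_d from the orthogonality condition (29).
K = np.array([[Ka(0.0),   Ka(Delta)],
              [Ka(Delta), Ka(0.0)]])
k_d = np.array([Ka(alpha), Ka(alpha + Delta)])
c = np.linalg.solve(K, k_d)

print("weights:", c)                          # -> [exp(-k*alpha), 0]
print("weight using a(t) alone:", np.exp(-k * alpha))
```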
Problem 6.1.5. Consider the following interpolation problem. You are given the values a(0) and a(T).
1. Find the MMSE estimate of a(t).
2. What is the resulting mean-square error?
3. Evaluate for t = T/2.
4. Consider the special case $K_a(\tau) = e^{-k|\tau|}$ and evaluate the processor constants (see the sketch after this problem).
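For parts 3 and 4, the two processor constants satisfy a 2×2 set of normal equations. A minimal sketch, assuming $K_a(\tau) = e^{-k|\tau|}$ with unit variance and arbitrary values for k and T:

```python
import numpy as np

# Interpolate a(t), 0 < t < T, from a(0) and a(T) with the special case
# K_a(tau) = exp(-k|tau|); unit variance, k and T are assumed values.
k, T = 1.0, 2.0
Ka = lambda tau: np.exp(-k * np.abs(tau))

def weights(t):
    # Normal equations for the two processor constants c_0(t), c_T(t).
    K = np.array([[Ka(0.0), Ka(T)],
                  [Ka(T),   Ka(0.0)]])
    k_d = np.array([Ka(t), Ka(T - t)])
    return np.linalg.solve(K, k_d)

c = weights(T / 2)
mse = Ka(0.0) - c @ np.array([Ka(T / 2), Ka(T / 2)])
print("c_0 = c_T =", c)             # equal by symmetry at t = T/2
print("mean-square error:", mse)
```

Solving the symmetric t = T/2 case by hand gives each constant as $e^{-kT/2}/(1 + e^{-kT})$, which the printed values should reproduce.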
Problem 6.1.6 [55]. We observe a(t) and its derivative $\dot{a}(t)$. Let $d(t) = a(t + \alpha)$, where $\alpha$ is a positive constant.
1. Find the MMSE linear estimate of d(t).
2. State the conditions on $K_a(\tau)$ for your answer to be meaningful.
3. Check your result for small $\alpha$.
Problem 6.1.7 [55]. We observe a(0) and a(t). Let $d(t) = a(t + \alpha)$, where $\alpha$ is a positive constant.
1. Find the MMSE linear estimate of d(t).
2. Check your result for $t \ll 1$.
Problem 6.1.8. Generalize the preceding model to n + 1 observations: a(0), a(t), a(2t), ..., a(nt). Let $d(t) = a(t + \alpha)$, where $\alpha$ is a positive constant. Assume that
$$E[a(t)] = 0, \qquad -\infty < t < \infty,$$
$$E[a(t)\,a(u)] = K_a(t - u) \triangleq K_a(\tau), \qquad -\infty < t, u < \infty.$$
1. Find the equations that specify the optimum linear processor (a numerical sketch follows this problem).
2. Find an explicit solution for $nt \ll 1$.
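The structure of part 1 is easy to exhibit numerically: the orthogonality condition yields a Toeplitz set of normal equations, one per observation instant. A minimal sketch, assuming an exponential covariance and arbitrary parameter values:

```python
import numpy as np

# Optimum linear processor for d(t) = a(t + alpha) given the samples
# a(0), a(t), ..., a(n*t).  Covariance and all values are assumptions.
n, t, alpha, k = 4, 0.5, 0.25, 1.0
Ka = lambda tau: np.exp(-k * np.abs(tau))

times = t * np.arange(n + 1)                # observation instants
K = Ka(times[:, None] - times[None, :])     # Toeplitz covariance matrix
k_d = Ka(t + alpha - times)                 # cross-covariances with d(t)

c = np.linalg.solve(K, k_d)                 # normal equations K c = k_d
print("weights:", np.round(c, 6))
# For this Markov (exponential) covariance only the samples bracketing
# the desired instant t + alpha receive nonzero weight.
```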
Problem 6.1.9 [55]. We want to reconstruct a(t) from an infinite number of samples a(nT), $n = \ldots, -1, 0, +1, \ldots$, using an MMSE linear estimate:
$$\hat{a}(t) = \sum_{n=-\infty}^{\infty} c_n(t)\, a(nT).$$
1. Find an expression that the coefficients $c_n(t)$ must satisfy.
2. Consider the special case in which
$$S_a(\omega) = 0, \qquad |\omega| > \frac{\pi}{T}.$$
Evaluate the coefficients.
3. Prove that the resulting mean-square error is zero. (Observe that this proves the sampling theorem for random processes. A numerical check follows.)
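A numerical check of parts 1 and 3 for the bandlimited case: with a flat spectrum over $|\omega| \le \pi/T$ the covariance is $K_a(\tau) = \operatorname{sinc}(\tau/T)$, and the sinc weights $c_n(t) = \operatorname{sinc}((t - nT)/T)$ should satisfy the coefficient equation and drive the mean-square error to zero. The flat spectrum and the truncation length are assumptions of this sketch.

```python
import numpy as np

# Bandlimited special case: take S_a(w) flat over |w| <= pi/T so that
# K_a(tau) = sinc(tau/T); np.sinc(x) = sin(pi*x)/(pi*x).  T assumed = 1.
T = 1.0
Ka = lambda tau: np.sinc(tau / T)

t = 0.37 * T                      # an arbitrary reconstruction instant
n = np.arange(-200, 201)          # truncated set of sample indices

# Candidate coefficients: c_n(t) = sinc((t - nT)/T).
c = np.sinc((t - n * T) / T)

# Coefficient equation: sum_n c_n(t) K_a(mT - nT) = K_a(t - mT) for all m.
m = 3
lhs = np.sum(c * Ka(m * T - n * T))
print("coefficient-equation residual:", lhs - Ka(t - m * T))

# Mean-square error K_a(0) - sum_n c_n(t) K_a(t - nT) -> 0 as the
# truncation grows; the truncated sum leaves only a small residual.
mse = Ka(0.0) - np.sum(c * Ka(t - n * T))
print("truncated mean-square error:", mse)
```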
Problem 6.1.10. In (29) we saw that
$$E[e_o(t)\, r(u)] = 0, \qquad T_i \le u \le T_f.$$
1. In our derivation we assumed $h_o(t, u)$ was continuous and defined $h_o(t, T_i)$ and $h_o(t, T_f)$ by the continuity requirement. Assume r(u) contains a white noise component. Prove
$$E[e_o(t)\, r(T_i)] = 0,$$
$$E[e_o(t)\, r(T_f)] = 0.$$
2. Now remove the continuity assumption on $h_o(t, u)$ and assume r(u) contains a white noise component. Find an equation specifying an $h_o(t, u)$ such that
$$E[e_o(t)\, r(u)] = 0, \qquad T_i \le u \le T_f.$$
Are the mean-square errors for the filters in parts 1 and 2 the same? Why?
3. Discuss the implications of removing the white noise component from r(u). Will $h_o(t, u)$ be continuous? Do we use strict or nonstrict inequalities in the integral equation?
P6.2 Stationary Processes, Infinite Past (Wiener Filters)
Realizable and Unrealizable Filtering