The desire to predict the complex WSS random process $x[n]$ based on the sample $x[n-1]$ by using a linear predictor

$$\hat{x}[n] = -a[1]\,x[n-1]$$

is expressed in [1, p. 61, exercise 3.10]. It is asked to choose $a[1]$ so as to minimize the MSE or prediction error power

$$\rho = E\left\{\left|x[n] - \hat{x}[n]\right|^2\right\}.$$

We are asked to find the optimal prediction parameter $a[1]$ and the minimum prediction error power by using the orthogonality principle.
Solution:

Using the orthogonality principle [1, p. 51, eq. 3.38] for the estimation of $x[n]$ translates into finding the $a[1]$ for which the observed data $x[n-1]$ will be orthogonal to the error $x[n] - \hat{x}[n]$:

$$E\left\{\left(x[n] + a[1]\,x[n-1]\right)x^*[n-1]\right\} = r_{xx}[1] + a[1]\,r_{xx}[0] = 0 \quad\Rightarrow\quad a[1] = -\frac{r_{xx}[1]}{r_{xx}[0]}.$$

Considering this orthogonality condition, the error is also orthogonal to $\hat{x}[n]$, so the mean squared error can be written as:

$$\rho_{\min} = E\left\{\left(x[n] - \hat{x}[n]\right)x^*[n]\right\} = r_{xx}[0] + a[1]\,r^*_{xx}[1] = r_{xx}[0] - \frac{\left|r_{xx}[1]\right|^2}{r_{xx}[0]}.$$
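As a quick numerical sanity check of the orthogonality condition (a hypothetical simulation, not part of the exercise; a real-valued AR(1) process with coefficient 0.8 is used for simplicity instead of a complex one), one can verify that the prediction error is uncorrelated with the observed sample:

```python
import numpy as np

# Simulate a zero-mean real AR(1) process x[n] = 0.8 x[n-1] + w[n]
# and verify that with the optimal a[1] = -r_xx[1]/r_xx[0] the
# prediction error e[n] = x[n] + a[1] x[n-1] is (approximately)
# orthogonal to the observed sample x[n-1].
rng = np.random.default_rng(0)
N = 200_000
w = rng.standard_normal(N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = 0.8 * x[n - 1] + w[n]

r0 = np.mean(x * x)              # sample estimate of r_xx[0]
r1 = np.mean(x[1:] * x[:-1])     # sample estimate of r_xx[1]
a1 = -r1 / r0                    # optimal prediction parameter

e = x[1:] + a1 * x[:-1]          # prediction error x[n] - xhat[n]
print(np.mean(e * x[:-1]))       # approximately zero
```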
This result can also be obtained by the equations [1, eq. 3.36, eq. 3.37]. They provide the solution for the optimal prediction parameter of a linear predictor that minimizes the MSE, as well as the minimum MSE itself. Note that in the book the covariances $C_{\theta x}$, $C_{xx}$ and $C_{\theta\theta}$ are used instead of the correlations $r_{\theta x}$, $r_{xx}$ and $r_{\theta\theta}$. Equating the two is only correct for a zero mean random process $x[n]$, as it is assumed in the derivation of the formula in the book. The formula for the optimal estimate is thus:

$$\hat{\theta} = C_{\theta x}\,C_{xx}^{-1}\,x.$$

The minimum MSE for a general signal is equal to:

$$M_{\hat{\theta}} = C_{\theta\theta} - C_{\theta x}\,C_{xx}^{-1}\,C_{x\theta}.$$
Translating the formulas to the notation of the exercise, we obtain $\theta = x[n]$, $x = x[n-1]$, and, for a zero mean process, $C_{\theta x} = E\{x[n]\,x^*[n-1]\} = r_{xx}[1]$, $C_{xx} = r_{xx}[0]$ and $C_{\theta\theta} = r_{xx}[0]$. The optimal prediction parameter $a[1]$ is thus given by:

$$-a[1] = C_{\theta x}\,C_{xx}^{-1} = \frac{r_{xx}[1]}{r_{xx}[0]} \quad\Rightarrow\quad a[1] = -\frac{r_{xx}[1]}{r_{xx}[0]},$$

while the minimum MSE is given by:

$$M_{\hat{\theta}} = r_{xx}[0] - \frac{\left|r_{xx}[1]\right|^2}{r_{xx}[0]},$$

which is equal to the solution that was obtained using the orthogonality principle.
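The closed-form minimum prediction error power can likewise be checked against the empirical MSE of the optimal predictor (again a hypothetical sketch, using the same real-valued AR(1) process with coefficient 0.8 as an arbitrary example):

```python
import numpy as np

# Check the minimum prediction error power
# rho_min = r_xx[0] - r_xx[1]^2 / r_xx[0]
# against the empirical MSE of the optimal one-step predictor
# xhat[n] = -a[1] x[n-1] on a simulated AR(1) process.
rng = np.random.default_rng(1)
N = 200_000
w = rng.standard_normal(N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = 0.8 * x[n - 1] + w[n]

r0 = np.mean(x * x)
r1 = np.mean(x[1:] * x[:-1])
a1 = -r1 / r0

rho_min = r0 - r1**2 / r0                   # closed-form minimum MSE
mse = np.mean((x[1:] + a1 * x[:-1]) ** 2)   # empirical prediction error power
print(rho_min, mse)                         # the two agree closely
```

For this process the driving noise has unit variance, so both values should come out near 1, well below the raw signal power $r_{xx}[0] \approx 2.8$.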
[1] Steven M. Kay: "Modern Spectral Estimation: Theory and Application", Prentice Hall, 1988, ISBN: 0-13-598582-X.