In [1, p. 61, exercise 3.11] we are asked to repeat problem [1, p. 61, exercise 3.10] (see also the solution [2]) for the general case where the predictor is given as

\[ \hat{x}[n] = -\sum_{k=1}^{p} a[k]\, x[n-k]. \tag{1} \]

Furthermore, we are asked to show that the optimal prediction coefficients are found by solving [1, p. 157, eq. 6.4] and that the minimum prediction error power is given by [1, p. 157, eq. 6.5].
Solution:
The equation for determining the optimal prediction coefficients from [1, p. 157, eq. 6.4] is given by:

\[ \sum_{k=1}^{p} a[k]\, r_{xx}[m-k] = -r_{xx}[m], \qquad m = 1, 2, \ldots, p, \tag{2} \]

whereas the minimum MSE is given by [1, p. 157, eq. 6.5] as:

\[ \rho_{\min} = r_{xx}[0] + \sum_{k=1}^{p} a[k]\, r_{xx}[-k], \tag{3} \]

where, for the real WSS process considered here, $r_{xx}[-k] = r_{xx}[k]$.
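As a quick sanity check, for $p = 1$ equation (2) reduces to $a[1]\, r_{xx}[0] = -r_{xx}[1]$, so that

\[ a[1] = -\frac{r_{xx}[1]}{r_{xx}[0]}, \qquad \hat{x}[n] = \frac{r_{xx}[1]}{r_{xx}[0]}\, x[n-1], \qquad \rho_{\min} = r_{xx}[0] \left( 1 - \left( \frac{r_{xx}[1]}{r_{xx}[0]} \right)^{2} \right), \]

which is exactly the single-coefficient predictor of exercise 3.10 [2].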
Using the orthogonality principle, we have to obtain the coefficients $a[k]$ that make the observed data $x[n-m]$, $m = 1, \ldots, p$, orthogonal to the error $e[n] = x[n] - \hat{x}[n]$, that is:

\[ E\left\{ e[n]\, x[n-m] \right\} = E\left\{ \left( x[n] + \sum_{k=1}^{p} a[k]\, x[n-k] \right) x[n-m] \right\} = 0, \qquad m = 1, \ldots, p. \tag{4} \]

Evaluating the expectations yields

\[ r_{xx}[m] + \sum_{k=1}^{p} a[k]\, r_{xx}[m-k] = 0, \qquad m = 1, \ldots, p. \tag{5} \]

We see that this is of the form of (2); thus the first part of the exercise is solved.
The previous relation is a set of $p$ linear equations in the variables $a[1], \ldots, a[p]$, and setting

\[ \mathbf{R}_{xx} = \begin{bmatrix} r_{xx}[0] & r_{xx}[1] & \cdots & r_{xx}[p-1] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[p-1] & r_{xx}[p-2] & \cdots & r_{xx}[0] \end{bmatrix}, \qquad \mathbf{a} = \begin{bmatrix} a[1] \\ a[2] \\ \vdots \\ a[p] \end{bmatrix}, \qquad \mathbf{r}_{xx} = \begin{bmatrix} r_{xx}[1] \\ r_{xx}[2] \\ \vdots \\ r_{xx}[p] \end{bmatrix}, \]

the linear equations can be written in matrix notation as:

\[ \mathbf{R}_{xx}\, \mathbf{a} = -\mathbf{r}_{xx}. \tag{6} \]
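As an illustration, here is a minimal numerical sketch of solving (6); the autocorrelation values are assumptions chosen for the example and are not taken from [1] or [2]:

```python
# Minimal sketch: solve the normal equations (6), R_xx a = -r_xx, for an
# assumed example autocorrelation sequence r_xx[0..3] (values illustrative).
import numpy as np
from scipy.linalg import toeplitz

r = np.array([2.0, 1.2, 0.6, 0.2])  # r_xx[0], r_xx[1], r_xx[2], r_xx[3]
p = len(r) - 1                      # predictor order

R = toeplitz(r[:p])                 # p x p symmetric Toeplitz matrix R_xx
a = np.linalg.solve(R, -r[1:])      # optimal coefficients a[1], ..., a[p]
rho = r[0] + a @ r[1:]              # minimum error power, equation (3)
print(a, rho)
```

In practice the Toeplitz structure of $\mathbf{R}_{xx}$ is exploited by the Levinson recursion, discussed in [1], rather than by a general-purpose solver.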
The previous relation provides the solution for the optimum prediction parameters, $\mathbf{a} = -\mathbf{R}_{xx}^{-1}\, \mathbf{r}_{xx}$. The MSE for those parameters is

\[ \rho = E\left\{ \left( x[n] - \hat{x}[n] \right)^{2} \right\} = E\left\{ \left( x[n] + \sum_{k=1}^{p} a[k]\, x[n-k] \right)^{2} \right\}. \tag{7} \]
Let $\mathbf{x}_p = \begin{bmatrix} x[n-1] & x[n-2] & \cdots & x[n-p] \end{bmatrix}^T$; then, because $E\{ x[n]\, \mathbf{x}_p \} = \mathbf{r}_{xx}$ and $E\{ \mathbf{x}_p \mathbf{x}_p^T \} = \mathbf{R}_{xx}$, equation (7) can be written as:

\[ \rho = r_{xx}[0] + 2\, \mathbf{a}^T \mathbf{r}_{xx} + \mathbf{a}^T \mathbf{R}_{xx}\, \mathbf{a}. \tag{8} \]
Because $\mathbf{a} = -\mathbf{R}_{xx}^{-1}\, \mathbf{r}_{xx}$ and $\mathbf{R}_{xx}$ is symmetric, the mean squared error can be reduced to:

\[ \rho_{\min} = r_{xx}[0] - 2\, \mathbf{r}_{xx}^T \mathbf{R}_{xx}^{-1} \mathbf{r}_{xx} + \mathbf{r}_{xx}^T \mathbf{R}_{xx}^{-1} \mathbf{R}_{xx} \mathbf{R}_{xx}^{-1} \mathbf{r}_{xx} = r_{xx}[0] - \mathbf{r}_{xx}^T \mathbf{R}_{xx}^{-1}\, \mathbf{r}_{xx}. \tag{9} \]
We note that the last formula was obtained by replacing $\mathbf{a} = -\mathbf{R}_{xx}^{-1}\, \mathbf{r}_{xx}$ and writing out the resulting inner product: since $-\mathbf{r}_{xx}^T \mathbf{R}_{xx}^{-1}\, \mathbf{r}_{xx} = \mathbf{r}_{xx}^T \mathbf{a} = \sum_{k=1}^{p} a[k]\, r_{xx}[k]$, the last equation is the same as the one given at (3) in [1, p. 157, eq. 6.5]. Using the orthogonality principle, the result can be found even faster, because:

\[ \rho = E\left\{ e[n]\, e[n] \right\} = E\left\{ e[n]\, x[n] \right\} + \sum_{k=1}^{p} a[k]\, E\left\{ e[n]\, x[n-k] \right\}. \tag{10} \]
Applying the orthogonality principle to the last term of the previous equation, we obtain $E\{ e[n]\, x[n-k] \} = 0$ for $k = 1, \ldots, p$, and thus the MSE is equal to:

\[ \rho_{\min} = E\left\{ e[n]\, x[n] \right\} = r_{xx}[0] + \sum_{k=1}^{p} a[k]\, r_{xx}[k]. \tag{11} \]
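To make the result concrete, the following Monte Carlo sketch checks the two formulas numerically; the AR(2) process and all parameter values are assumptions chosen for illustration, not taken from [1] or [2]:

```python
# Monte Carlo sketch: simulate an assumed AR(2) process, estimate its
# autocorrelation, solve the normal equations (6), and compare the
# minimum error power (3) with the empirical prediction error power.
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
N, p = 200_000, 2

# x[n] = -a[1] x[n-1] - a[2] x[n-2] + u[n], with unit-variance white noise.
a_true = np.array([-1.5, 0.8])
u = rng.standard_normal(N)
x = np.zeros(N)
for n in range(p, N):
    x[n] = -a_true[0] * x[n - 1] - a_true[1] * x[n - 2] + u[n]

# Biased sample autocorrelation estimates r_xx[0..p].
r = np.array([x[: N - k] @ x[k:] / N for k in range(p + 1)])

# Solve the Toeplitz system R_xx a = -r_xx of equation (6).
a = solve_toeplitz(r[:p], -r[1:])

# Minimum prediction error power from equation (3).
rho = r[0] + a @ r[1:]

# Empirical error power of e[n] = x[n] + a[1] x[n-1] + a[2] x[n-2].
e = x[p:] + a[0] * x[1:-1] + a[1] * x[:-2]
print(a, rho, np.mean(e**2))  # expect a close to a_true and rho close to 1
```

The fitted coefficients recover the assumed AR parameters, and the measured error power agrees with $\rho_{\min}$, as expected: for an AR process driven by unit-variance white noise, the minimum prediction error power equals the driving noise variance.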
The previous equation is again the same as the one given at (3) in [1, p. 157, eq. 6.5], since $r_{xx}[-k] = r_{xx}[k]$. We have thus proven, using the orthogonality principle, both [1, p. 157, eq. 6.4] and [1, p. 157, eq. 6.5]. QED.
[1] Steven M. Kay, "Modern Spectral Estimation – Theory and Applications", Prentice Hall, ISBN 0-13-598582-X.
[2] Chatzichrisafis, "Solution of exercise 3.10 from Kay's Modern Spectral Estimation – Theory and Applications".