In [1, p. 61, exercise 3.11] we are asked to repeat problem [1, p. 61, exercise 3.10] (see also the solution [2]) for the general case in which the predictor is given as

$$\hat{x}[n]=-\sum_{k=1}^{p}\alpha_{k}x[n-k].$$

Furthermore, we are asked to show that the optimal prediction coefficients $\alpha_{k},\;k=1,\dots,p,$ are found by solving [1, p. 157, eq. 6.4] and that the minimum prediction error power is given by [1, p. 157, eq. 6.5].
Solution:
The equation determining the optimal prediction coefficients, from [1, p. 157, eq. 6.4], is given by:

$$\sum_{k=1}^{p}\alpha_{k}r_{xx}[l-k]=-r_{xx}[l],\qquad l=1,2,\dots,p,\tag{2}$$

whereas the minimum MSE is given by [1, p. 157, eq. 6.5] as:

$$\rho_{\min}=r_{xx}[0]+\sum_{k=1}^{p}\alpha_{k}r_{xx}[-k].\tag{3}$$
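Before deriving these two relations, they can be checked numerically. The sketch below (my addition, not part of the exercise) solves the normal equations of [1, eq. 6.4] for a hypothetical real AR(1) process with known autocorrelation; the parameters `a`, `s2` and the order `p` are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical real AR(1) process x[n] = a*x[n-1] + w[n] with noise power s2;
# its autocorrelation is r_xx[k] = s2 * a**|k| / (1 - a**2).
a, s2, p = 0.9, 1.0, 2
r = s2 * a ** np.arange(p + 1) / (1 - a ** 2)      # r_xx[0], ..., r_xx[p]

# Normal equations [1, eq. 6.4]: sum_k alpha_k r_xx[l-k] = -r_xx[l].
# For a real process r_xx[-k] = r_xx[k], so the matrix is symmetric Toeplitz.
R = np.array([[r[abs(l - k)] for k in range(p)] for l in range(p)])
alpha = np.linalg.solve(R, -r[1:p + 1])

# Minimum prediction error power [1, eq. 6.5]: r_xx[0] + sum_k alpha_k r_xx[-k].
rho = r[0] + alpha @ r[1:p + 1]

print(alpha)  # approx [-0.9, 0.0]: the optimal predictor is a*x[n-1]
print(rho)    # approx 1.0: the driving-noise power s2
```

As expected for an AR(1) process, the optimal second-order predictor puts all its weight on the most recent sample, and the minimum error power equals the driving-noise variance.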
Using the orthogonality principle, we have to obtain the coefficients $\alpha_{k}$ that make the observed data $x[n-k],\;k=1,\dots,p,$ orthogonal to the error $\hat{x}[n]-x[n]$, that is:

$$E\left\{x^{\ast}[n-l]\left(\hat{x}[n]-x[n]\right)\right\}=-E\left\{x^{\ast}[n-l]\left(x[n]+\sum_{k=1}^{p}\alpha_{k}x[n-k]\right)\right\}=0,\qquad l=1,\dots,p,$$

which is equivalent to

$$\sum_{k=1}^{p}\alpha_{k}r_{xx}[l-k]=-r_{xx}[l],\qquad l=1,\dots,p.$$

We see that this is of the form of (2); thus the first part of the exercise is solved.
The previous relation is a linear equation in the variables $\alpha_{1},\dots,\alpha_{p}$, and setting $\mathbf{r}_{xx}=\left[\begin{array}{cccc}r_{xx}[1] & r_{xx}[2] & \cdots & r_{xx}[p]\end{array}\right]^{T}$, $\boldsymbol{\alpha}=\left[\begin{array}{cccc}\alpha_{1} & \alpha_{2} & \cdots & \alpha_{p}\end{array}\right]^{T}$ and

$$\mathbf{R}_{xx}=\left[\begin{array}{cccc}r_{xx}[0] & r_{xx}[-1] & \cdots & r_{xx}[-(p-1)]\\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[-(p-2)]\\ \vdots & \vdots & \ddots & \vdots\\ r_{xx}[p-1] & r_{xx}[p-2] & \cdots & r_{xx}[0]\end{array}\right],$$

the linear equations can be written in matrix notation as:

$$\mathbf{R}_{xx}\boldsymbol{\alpha}=-\mathbf{r}_{xx}.$$
The previous relation provides the solution for the optimum prediction parameters. The MSE for these parameters is

$$\rho=E\left\{\left|x[n]-\hat{x}[n]\right|^{2}\right\}=E\left\{\left(x[n]-\hat{x}[n]\right)\left(x[n]-\hat{x}[n]\right)^{\ast}\right\}.\tag{7}$$
Let $\mathbf{x}=\left[\begin{array}{ccc}x[n-1] & \cdots & x[n-p]\end{array}\right]^{T}$; then, because

$$\hat{x}[n]=-\boldsymbol{\alpha}^{T}\mathbf{x},$$

equation (7) can be written as:

$$\rho=E\left\{\left(x[n]+\boldsymbol{\alpha}^{T}\mathbf{x}\right)\left(x[n]+\boldsymbol{\alpha}^{T}\mathbf{x}\right)^{\ast}\right\}=r_{xx}[0]+\boldsymbol{\alpha}^{T}E\left\{\mathbf{x}x^{\ast}[n]\right\}+E\left\{x[n]\mathbf{x}^{H}\right\}\boldsymbol{\alpha}^{\ast}+\boldsymbol{\alpha}^{T}E\left\{\mathbf{x}\mathbf{x}^{H}\right\}\boldsymbol{\alpha}^{\ast}.$$
Because

$$\boldsymbol{\alpha}^{T}E\left\{\mathbf{x}\mathbf{x}^{H}\right\}\boldsymbol{\alpha}^{\ast}=\boldsymbol{\alpha}^{H}\mathbf{R}_{xx}\boldsymbol{\alpha}=-\boldsymbol{\alpha}^{H}\mathbf{r}_{xx}=-E\left\{x[n]\mathbf{x}^{H}\right\}\boldsymbol{\alpha}^{\ast},$$

the mean squared error can be reduced to:

$$\rho=r_{xx}[0]+\boldsymbol{\alpha}^{T}E\left\{\mathbf{x}x^{\ast}[n]\right\}=r_{xx}[0]+\sum_{k=1}^{p}\alpha_{k}r_{xx}[-k].$$

We note that the last formula was obtained by substituting the normal equations $\mathbf{R}_{xx}\boldsymbol{\alpha}=-\mathbf{r}_{xx}$, using $r^{\ast}_{xx}[k]=r_{xx}[-k]$, and writing out the resulting inner product. The last equation is the same as the one given at (3) in [1, p. 157, eq. 6.5]. Using the orthogonality principle, the result can be found even faster, because:

$$\rho=E\left\{\left(x[n]+\sum_{k=1}^{p}\alpha_{k}x[n-k]\right)x^{\ast}[n]\right\}+\sum_{l=1}^{p}\alpha_{l}^{\ast}\,E\left\{x^{\ast}[n-l]\left(x[n]+\sum_{k=1}^{p}\alpha_{k}x[n-k]\right)\right\}.$$
Applying the orthogonality principle to the last term of the previous equation, we obtain $E\left\{x^{\ast}[n-l]\left(x[n]+\sum_{k=1}^{p}\alpha_{k}x[n-k]\right)\right\}=0$, and thus the MSE is equal to:

$$\rho=E\left\{\left(x[n]+\sum_{k=1}^{p}\alpha_{k}x[n-k]\right)x^{\ast}[n]\right\}=r_{xx}[0]+\sum_{k=1}^{p}\alpha_{k}r_{xx}[-k].$$
The previous equation is again the same as the one given at (3) in [1, p. 157, eq. 6.5]. We have thus proven, using the orthogonality principle, both [1, p. 157, eq. 6.4] and [1, p. 157, eq. 6.5]. QED.
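To complement the proof, here is a small Monte Carlo sketch (my addition; the AR(1) parameters are assumptions chosen for illustration) confirming that the optimal prediction error is orthogonal to the past samples and that its power matches the minimum MSE formula.

```python
import numpy as np

# Simulate a hypothetical AR(1) process x[n] = a*x[n-1] + w[n], var(w) = 1.
rng = np.random.default_rng(0)
a, n = 0.9, 200_000
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + w[i]

# For this process the optimal coefficients at order p = 2 are (-a, 0), so the
# prediction error is e[n] = x[n] + alpha_1*x[n-1] + alpha_2*x[n-2].
alpha1, alpha2 = -a, 0.0
e = x[2:] + alpha1 * x[1:-1] + alpha2 * x[:-2]

print(np.mean(e * x[1:-1]))  # ~0: e[n] is orthogonal to x[n-1]
print(np.mean(e * x[:-2]))   # ~0: e[n] is orthogonal to x[n-2]
print(np.mean(e ** 2))       # ~1: the minimum prediction error power
```

The sample cross-correlations between the error and the past observations vanish up to Monte Carlo noise, and the sample error power agrees with $\rho=r_{xx}[0]+\sum_{k}\alpha_{k}r_{xx}[-k]$, which here equals the driving-noise variance.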
[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN 0-13-598582-X.
[2] Chatzichrisafis: “Solution of exercise 3.10 from Kay’s Modern Spectral Estimation – Theory and Applications”.