In [1, p. 61, exercise 3.10] it is desired to predict the complex WSS random process $x[n]$ based on the sample $x[n-1]$ by using the linear predictor $\hat{x}[n] = a\,x[n-1]$. We are asked to choose $a$ to minimize the MSE or prediction error power

$$\sigma^{2} = E\left\{\left|x[n]-\hat{x}[n]\right|^{2}\right\} = E\left\{\left|x[n]-a\,x[n-1]\right|^{2}\right\}, \qquad (1)$$

and to find the optimal prediction parameter $a$ and the minimum prediction error power by using the orthogonality principle.
Solution:
Using the orthogonality principle [1, p. 51, eq. 3.38] for the estimation of $x[n]$ translates into finding the $a$ for which the observed data $x[n-1]$ is normal to the error $e[n] = x[n]-a\,x[n-1]$:

$$E\left\{\left(x[n]-a\,x[n-1]\right)x^{\ast}[n-1]\right\} = r_{xx}[1] - a\,r_{xx}[0] = 0, \qquad (2)$$

which yields the optimal prediction parameter

$$a = \frac{r_{xx}[1]}{r_{xx}[0]}. \qquad (3)$$

Considering (3) and the fact that the error is normal to the data, the minimum mean squared error can be written as:

$$\sigma_{\min}^{2} = E\left\{e[n]\,x^{\ast}[n]\right\} = r_{xx}[0] - a\,r_{xx}^{\ast}[1] = r_{xx}[0] - \frac{\left|r_{xx}[1]\right|^{2}}{r_{xx}[0]}. \qquad (4)$$
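As a quick sanity check, which is not part of the exercise in [1], consider the special case of a zero mean AR(1) process with ACF $r_{xx}[k] = \sigma_{x}^{2}\rho^{|k|}$, $|\rho|<1$ (an assumption made here purely for illustration). Substituting into (3) and (4) gives

$$a = \frac{\sigma_{x}^{2}\rho}{\sigma_{x}^{2}} = \rho, \qquad \sigma_{\min}^{2} = \sigma_{x}^{2} - \frac{\sigma_{x}^{4}\left|\rho\right|^{2}}{\sigma_{x}^{2}} = \sigma_{x}^{2}\left(1-\left|\rho\right|^{2}\right),$$

so the more strongly correlated consecutive samples are, the smaller the prediction error power becomes.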
This result can also be obtained from the equations [1, eq. 3.36, eq. 3.37]. They provide the optimal parameter vector of a linear predictor $\hat{\theta} = \mathbf{a}^{H}\mathbf{x}$ that minimizes the MSE, as well as the minimum MSE itself. Note that in the book $\sigma_{\theta}^{2}$ and $\sigma_{x}^{2}$ are used instead of $r_{\theta\theta}[0]$ and $r_{xx}[0]$. This is only correct for a zero mean random process $x[n]$, as is assumed in the derivation of the formula in the book. The formula for the optimal coefficients is thus:

$$\mathbf{a} = \mathbf{R}_{xx}^{-1}\,\mathbf{r}_{\theta x},$$

while the minimum MSE for a general signal $x[n]$ is equal to:

$$\sigma_{\min}^{2} = r_{\theta\theta}[0] - \mathbf{r}_{\theta x}^{H}\,\mathbf{R}_{xx}^{-1}\,\mathbf{r}_{\theta x}.$$
Translating the formulas to the notation of the exercise, we obtain $\theta = x[n]$, $\mathbf{x} = x[n-1]$, $r_{\theta\theta}[0] = E\left\{x[n]x^{\ast}[n]\right\} = r_{xx}[0]$ and $\mathbf{r}_{\theta x} = E\left\{\mathbf{x}\theta^{H}\right\} = E\left\{x^{\ast}[n]x[n-1]\right\} = r_{xx}[-1]$, $\mathbf{R}_{xx} = E\left\{x^{\ast}[n-1]x[n-1]\right\} = r_{xx}[0]$, and for a zero mean process $r_{xx}[0] = \sigma_{\theta}^{2} = \sigma_{x}^{2}$.
The optimal prediction parameter $a$ is thus given by:

$$a = \left(\mathbf{R}_{xx}^{-1}\,\mathbf{r}_{\theta x}\right)^{H} = \frac{r_{xx}^{\ast}[-1]}{r_{xx}[0]} = \frac{r_{xx}[1]}{r_{xx}[0]},$$

while the minimum MSE is given by:

$$\sigma_{\min}^{2} = r_{xx}[0] - \frac{r_{xx}^{\ast}[-1]\,r_{xx}[-1]}{r_{xx}[0]} = r_{xx}[0] - \frac{\left|r_{xx}[1]\right|^{2}}{r_{xx}[0]},$$

which is equal to the solution that was obtained using the orthogonality principle, eqs. (3) and (4).
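The result can also be checked numerically. Below is a minimal simulation sketch, not taken from [1], assuming a real-valued zero mean AR(1) process with coefficient $\rho = 0.8$ and unit-variance driving noise; the names and parameters are illustrative only.

```python
import numpy as np

# Simulate a zero mean AR(1) process x[n] = rho * x[n-1] + w[n].
# Theory (see (3) and (4)): a = rho and sigma_min^2 = sigma_x^2 * (1 - rho^2),
# which equals the driving-noise variance (here 1).
rng = np.random.default_rng(0)
rho, n_samples = 0.8, 200_000

w = rng.standard_normal(n_samples)
x = np.empty(n_samples)
x[0] = w[0]
for n in range(1, n_samples):
    x[n] = rho * x[n - 1] + w[n]

# Sample estimates of r_xx[0] and r_xx[1]; the process is real,
# so the conjugates in the formulas can be dropped.
r0 = np.mean(x * x)
r1 = np.mean(x[1:] * x[:-1])

a_hat = r1 / r0                                   # eq. (3)
mse = np.mean((x[1:] - a_hat * x[:-1]) ** 2)      # prediction error power

print(f"a_hat = {a_hat:.3f} (theory: {rho})")
print(f"mse   = {mse:.3f} (theory: {r0 - r1**2 / r0:.3f}, noise var: 1.0)")
```

Both printed values should agree with the closed-form expressions from (3) and (4) up to the estimation error of the sample ACF.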
[1] Steven M. Kay: *Modern Spectral Estimation: Theory and Application*, Prentice Hall, 1988, ISBN 0-13-598582-X.