The problem of predicting the complex WSS random process based on the sample
![x[n-1]](https://lysario.de/wp-content/cache/tex_f14c06cc1753ec7fe07c98dd3f429390.png)
by using a linear predictor
is posed in
[1, p. 61, exercise 3.10].
There, we are asked to choose
![\alpha_{1}](https://lysario.de/wp-content/cache/tex_42b65ba9cd07af0a2b5901a7a68770e9.png)
to minimize the MSE, or prediction error power, i.e., to find the optimal prediction parameter
![\alpha_{1}](https://lysario.de/wp-content/cache/tex_42b65ba9cd07af0a2b5901a7a68770e9.png)
and the minimum prediction error power by using the orthogonality principle.
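In symbols, the predictor is assumed to take the single-coefficient form below (a sketch of the setup; the sign convention is chosen to match the coefficient convention used in the solution):

```latex
% One-step linear predictor with a single coefficient \alpha_1
\hat{x}[n] = -\alpha_{1}\, x[n-1]
```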
Solution:
Using the orthogonality principle
[1, p.51, eq. 3.38] for the estimation of
![x[n]](https://lysario.de/wp-content/cache/tex_d3baaa3204e2a03ef9528a7d631a4806.png)
translates into finding the
![\alpha_{1}](https://lysario.de/wp-content/cache/tex_42b65ba9cd07af0a2b5901a7a68770e9.png)
for which the observed data
![x[n-1]](https://lysario.de/wp-content/cache/tex_f14c06cc1753ec7fe07c98dd3f429390.png)
is orthogonal to the error.
Considering (3), the mean squared error can be written as:
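A sketch of the steps, assuming the predictor $\hat{x}[n]=-\alpha_{1}x[n-1]$ and the convention $r_{xx}[k]=E\{x^{\ast}[n]x[n+k]\}$ used below:

```latex
% (3) Orthogonality: the data x[n-1] is orthogonal to the prediction error
E\left\{ x[n-1]\bigl(x[n] + \alpha_{1} x[n-1]\bigr)^{\ast} \right\} = 0
\quad\Rightarrow\quad
r_{xx}[-1] + \alpha_{1}^{\ast}\, r_{xx}[0] = 0 .

% Using orthogonality, the MSE reduces to
\mathrm{MSE}
= E\left\{ \bigl(x[n] + \alpha_{1} x[n-1]\bigr)\, x^{\ast}[n] \right\}
= r_{xx}[0] + \alpha_{1}\, r_{xx}[-1] .

% (4) Solving for \alpha_1 and substituting back:
\alpha_{1} = -\,\frac{r_{xx}[1]}{r_{xx}[0]} ,
\qquad
\mathrm{MSE}_{\min} = r_{xx}[0] - \frac{\lvert r_{xx}[1]\rvert^{2}}{r_{xx}[0]} .
```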
This result can also be obtained from the equations
[1, eq. 3.36, eq. 3.37]. They provide the solution for the optimal prediction parameter of a linear predictor
![\hat{\theta}=-\sum_{i=0}^{N-1}\beta_{i}^{\ast}x_{i}](https://lysario.de/wp-content/cache/tex_a659d1681101f1e07db4359f1d484f1e.png)
, that minimizes the MSE, as well as the minimum MSE itself. Note that in the book
![\mathbf{C}_{xx}](https://lysario.de/wp-content/cache/tex_a92c6609fa05ced32624185c92f71be3.png)
is used instead of
![\mathbf{R}_{xx}](https://lysario.de/wp-content/cache/tex_4a7420603d0000eda399f88bd3f45296.png)
and
![\sigma_{\theta},\sigma_{x}](https://lysario.de/wp-content/cache/tex_b625fe54f77c54c2a6c059eeeef49c60.png)
instead of
![r_{\theta \theta}[0]](https://lysario.de/wp-content/cache/tex_c0219a326d69241fe834eb6473323994.png)
and
![r_{xx}[0]](https://lysario.de/wp-content/cache/tex_92c5630642b279858745d05d4097b3fc.png)
. This is only correct for a zero mean random process
![x[n]](https://lysario.de/wp-content/cache/tex_d3baaa3204e2a03ef9528a7d631a4806.png)
, as is assumed in the derivation of the formula in the book. The formula for the optimal coefficients is thus:
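In the notation above, the optimal coefficient vector takes the standard Wiener form (a sketch; compare [1, eq. 3.36]):

```latex
% Optimal coefficient vector of the linear predictor
\boldsymbol{\beta} = -\,\mathbf{R}_{xx}^{-1}\,\mathbf{r}_{\theta x}
```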
The minimum MSE for a general signal
![x[n]](https://lysario.de/wp-content/cache/tex_d3baaa3204e2a03ef9528a7d631a4806.png)
is equal to:
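A sketch, in the notation above (compare [1, eq. 3.37]):

```latex
% Minimum MSE for a general signal
\mathrm{MSE}_{\min}
= r_{\theta\theta}[0] - \mathbf{r}_{\theta x}^{H}\,\mathbf{R}_{xx}^{-1}\,\mathbf{r}_{\theta x}
```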
Translating the formulas to the notation of the exercise, we obtain
![\theta =x[n]](https://lysario.de/wp-content/cache/tex_ebceea97dfa3ecda473bc59100754274.png)
,
![\mathbf{x}=x[n-1]](https://lysario.de/wp-content/cache/tex_fe89bc16cb2f5f0ba3baf853dd3118a7.png)
,
![\beta_{1}=\alpha^{\ast}_{1}](https://lysario.de/wp-content/cache/tex_12b9be52d29324e4491bc15a23339eb3.png)
and
![\mathbf{r}_{\theta x}=E\left\{\mathbf{x}\theta^{H}\right\}=E\left\{x^{\ast}[n]x[n-1]\right\}=r_{xx}[-1]](https://lysario.de/wp-content/cache/tex_fea96e545397b9f1724990fedcc117e0.png)
,
![\mathbf{R}_{xx}=E\left\{ x^{\ast}[n-1]x[n-1]\right\}=r_{xx}[0]](https://lysario.de/wp-content/cache/tex_b0865d4540df90fe1b90e211dd8a9da5.png)
and, for a zero-mean process,
![r_{xx}[0]=\sigma_{\theta}^{2}=\sigma_{x}^{2}](https://lysario.de/wp-content/cache/tex_23b9fa5525a8d3c398f88495687c81bb.png)
.
The optimal prediction parameter
![\beta_{1}=\alpha^{\ast}_{1}](https://lysario.de/wp-content/cache/tex_12b9be52d29324e4491bc15a23339eb3.png)
is thus given by:
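Substituting the scalar quantities above (a sketch, using $r_{xx}[1]=r_{xx}^{\ast}[-1]$):

```latex
\beta_{1} = \alpha_{1}^{\ast} = -\,\frac{r_{xx}[-1]}{r_{xx}[0]}
\quad\Longleftrightarrow\quad
\alpha_{1} = -\,\frac{r_{xx}[1]}{r_{xx}[0]}
```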
while the minimum MSE is given by:
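Likewise for the minimum MSE (a sketch):

```latex
\mathrm{MSE}_{\min}
= r_{xx}[0] - \frac{\lvert r_{xx}[-1]\rvert^{2}}{r_{xx}[0]}
= r_{xx}[0]\left(1 - \frac{\lvert r_{xx}[1]\rvert^{2}}{r_{xx}[0]^{2}}\right)
```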
This is equal to the solution that was obtained using the orthogonality principle (4).
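As a numerical sanity check (not part of the original exercise), one can simulate a real-valued AR(1) process with NumPy and verify that the sample version of $-r_{xx}[1]/r_{xx}[0]$ satisfies the orthogonality condition and attains the predicted minimum error power. The process, the coefficient 0.8, and all variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a real-valued AR(1) process x[n] = 0.8*x[n-1] + u[n]
# (illustrative choice; WSS after the transient has died out).
N = 200_000
u = rng.standard_normal(N)
x = np.empty(N)
x[0] = u[0]
for n in range(1, N):
    x[n] = 0.8 * x[n - 1] + u[n]
x = x[1000:]  # discard the transient

# Sample autocorrelations r_xx[0] and r_xx[1]
r0 = np.mean(x * x)
r1 = np.mean(x[1:] * x[:-1])

# Optimal coefficient for the predictor xhat[n] = -alpha1*x[n-1]
alpha1 = -r1 / r0  # ≈ -0.8 for this process

# Prediction error and orthogonality check: E{x[n-1]*e[n]} ≈ 0
e = x[1:] + alpha1 * x[:-1]
orthogonality = np.mean(x[:-1] * e)

# Minimum prediction error power: formula vs. empirical estimate
mse_formula = r0 - r1**2 / r0
mse_empirical = np.mean(e**2)
```

For this process the empirical minimum error power should be close to the innovation variance of 1, and the formula and empirical values should agree closely.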
[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN: 0-13-598582-X.