Author: Panagiotis, 29 Jan
In [1, p. 61, exercise 3.11] we are asked to repeat problem [1, p. 61, exercise 3.10] (see also the solution [2]) for the general case in which the predictor is a linear combination of the $p$ past samples $x[n-1], \dots, x[n-p]$. Furthermore, we are asked to show that the optimal prediction coefficients $a[1], \dots, a[p]$ are found by solving [1, p. 157, eq. 6.4] and that the minimum prediction error power is given by [1, p. 157, eq. 6.5].
read the conclusion >
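As a numerical companion to the exercise above, here is a minimal sketch of solving equations of the shape of [1, eq. 6.4] for the prediction coefficients and evaluating the resulting error power. The autocorrelation values and the sign convention $\hat{x}[n] = -\sum_{k=1}^{p} a[k]\,x[n-k]$ are my own assumptions for illustration, not taken from [1].

```python
import numpy as np

# Hypothetical autocorrelation values r_xx[0], r_xx[1], r_xx[2] of a real
# WSS process, chosen only for illustration (not from the exercise).
r = np.array([2.0, 1.2, 0.5])
p = 2

# Prediction-coefficient equations under the assumed convention
# xhat[n] = -sum_{k=1}^{p} a[k] x[n-k]:
#   sum_{k=1}^{p} a[k] r_xx[j-k] = -r_xx[j],  j = 1, ..., p
R = np.array([[r[abs(j - k)] for k in range(1, p + 1)]
              for j in range(1, p + 1)])
a = np.linalg.solve(R, -r[1:])

# Minimum prediction error power (real-valued case):
#   rho = r_xx[0] + sum_{k=1}^{p} a[k] r_xx[k]
rho = r[0] + a @ r[1:]
print("a =", a, "rho =", rho)
```

The matrix `R` is Toeplitz, so for larger orders a Levinson-type solver would be the idiomatic choice; a dense solve keeps the sketch short.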
Author: Panagiotis, 23 Jan
The desire to predict the complex WSS random process $x[n]$ based on the sample
![x[n-1]](https://lysario.de/wp-content/cache/tex_f14c06cc1753ec7fe07c98dd3f429390.png)
by using a linear predictor of the form $\hat{x}[n] = a\,x[n-1]$ is expressed in [1, p. 61, exercise 3.10]. It is asked to choose the prediction parameter $a$ so as to minimize the MSE, or prediction error power, $E\{|x[n] - \hat{x}[n]|^2\}$. We are asked to find the optimal prediction parameter and the minimum prediction error power by using the orthogonality principle.
read the conclusion >
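A quick numerical check of the orthogonality-principle result for the one-step predictor: setting $E\{(x[n] - a\,x[n-1])\,x[n-1]\} = 0$ gives $a = r_{xx}[1]/r_{xx}[0]$. The real AR(1) process below is my own stand-in example, not the process of the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a real AR(1) process as a stand-in WSS process
# (a hypothetical example, not taken from the exercise).
N, phi = 50_000, 0.8
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = phi * x[n - 1] + w[n]

# Sample autocorrelations r_xx[0] and r_xx[1]
r0 = np.mean(x * x)
r1 = np.mean(x[1:] * x[:-1])

# Orthogonality principle: E[(x[n] - a x[n-1]) x[n-1]] = 0  =>  a = r1 / r0
a = r1 / r0

# Minimum prediction error power: rho = r0 - r1**2 / r0
rho = r0 - r1 ** 2 / r0
print("a =", a, "rho =", rho)
```

For this AR(1) process the optimal coefficient should come out near $\phi = 0.8$ and the error power near the unit innovation variance.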
In [1, p. 61, exercise 3.9] we are asked to consider the real linear model $x[n] = A + B\,n + z[n]$ and to find the MLE of the slope $B$ and the intercept $A$ by assuming that
![z[n]](https://lysario.de/wp-content/cache/tex_8a8c996b9e9d1294c8f815911479257f.png)
is real white Gaussian noise with zero mean and variance $\sigma^2$.
Furthermore, it is requested to find the MLE of the intercept if in the linear model the slope is set to zero.
read the conclusion >
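Because the noise is white Gaussian, maximizing the likelihood over the slope and intercept is equivalent to minimizing $\sum_n (x[n] - A - Bn)^2$, i.e. a least-squares line fit. Below is a minimal sketch; the true parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical line-model data x[n] = A + B*n + z[n] with white Gaussian z[n]
# (A, B, sigma are made-up values for illustration).
A_true, B_true, sigma = 1.0, 0.5, 0.3
n = np.arange(100)
x = A_true + B_true * n + sigma * rng.standard_normal(n.size)

# With white Gaussian noise, maximizing the likelihood over (A, B) is the
# same as minimizing sum_n (x[n] - A - B*n)^2: an ordinary least-squares fit.
H = np.column_stack([np.ones(n.size), n.astype(float)])
A_hat, B_hat = np.linalg.lstsq(H, x, rcond=None)[0]
print("A_hat =", A_hat, "B_hat =", B_hat)
```

The estimates should land close to the chosen true values, with the slope pinned down much more tightly than the intercept because of the long observation window.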
Author: Panagiotis, 22 Sep
In [1, p. 61, exercise 3.8] we are asked to prove that the sample mean is a sufficient statistic for the mean under the conditions of [1, p. 61, exercise 3.4]. Assuming that the variance $\sigma^2$ is known, we are asked to find the MLE of the mean by maximizing the likelihood function.
read the conclusion >
Author: Panagiotis, 25 Apr
In [1, p. 61, exercise 3.7] we are asked to find the MLE of the mean $\mu$ and the variance $\sigma^2$ for the conditions of Problem [1, p. 60, exercise 3.4] (see also [2, solution of exercise 3.4]). We are asked whether the MLEs of the parameters are asymptotically unbiased, efficient, and Gaussian distributed.
read the conclusion >
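On the asymptotic questions, a small Monte Carlo illustrates the best-known finite-sample effect for i.i.d. Gaussian data (my assumed reading of the exercise 3.4 conditions): the variance MLE divides by $N$ rather than $N-1$, so $E\{\hat{\sigma}^2\} = \sigma^2 (N-1)/N$ at finite $N$, a bias that vanishes as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical i.i.d. Gaussian setup (values chosen for illustration).
mu_true, var_true, N = 1.0, 4.0, 5

def mle(x):
    # Gaussian MLEs: sample mean, and variance with a 1/N (not 1/(N-1)) factor
    m = x.mean()
    return m, np.mean((x - m) ** 2)

# Monte Carlo average of the variance MLE at small N: close to
# (N-1)/N * var_true rather than var_true itself.
trials = 20_000
vs = np.array([mle(mu_true + np.sqrt(var_true) * rng.standard_normal(N))[1]
               for _ in range(trials)])
print("mean of var MLE:", vs.mean(),
      " (N-1)/N * var:", (N - 1) / N * var_true)
```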