In [1, p. 61, exercise 3.7] we are asked to find the MLE of \mu_{x} and \sigma_{x}^{2} for the conditions of Problem [1, p. 60, exercise 3.4] (see also [2, solution of exercise 3.4]), and to determine whether the MLEs of the parameters are asymptotically unbiased, efficient, and Gaussianly distributed.
Solution:
The p.d.f. of the observations \mathbf{x}=\left[ \begin{array}{cccc} x[0] & x[1] & \cdots & x[N-1] \end{array} \right]^{T} is given by
f(\mathbf{x})=\frac{1}{\left(\sqrt{2\pi}\right)^{N} |\mathbf{C_{xx}}|^{1/2}} e^{-\frac{1}{2}\left(\mathbf{x}-\mathbf{\mu}_{x}\right)^{T}\mathbf{C^{-1}_{xx}}\left(\mathbf{x}-\mathbf{\mu}_{x} \right)}

Here \mathbf{C_{xx}}=\mathrm{diag}_{N}(\sigma^{2}_{x}, \ldots, \sigma^{2}_{x}) and \mathbf{C^{-1}_{xx}}=\mathrm{diag}_{N}(\frac{1}{\sigma^{2}_{x}}, \ldots, \frac{1}{\sigma^{2}_{x}}), so the determinant is |\mathbf{C_{xx}}|=\sigma^{2N}_{x}; \mathbf{\mu}_{x} denotes the vector all N of whose entries equal \mu_{x}. Furthermore we can simplify the exponent:
-\frac{1}{2}\left(\mathbf{x}-\mathbf{\mu}_{x}\right)^{T}\mathbf{C^{-1}_{xx}}\left(\mathbf{x}-\mathbf{\mu}_{x} \right) = -\frac{1}{2}\sum\limits_{i=0}^{N-1}\frac{1}{\sigma_{x}^{2}}\left(x[i]-\mu_{x}\right)^{2} = -\frac{1}{2\sigma_{x}^{2}}\sum\limits_{i=0}^{N-1}\left(x[i]-\mu_{x}\right)^{2}

For a given measurement \mathbf{x}=\mathbf{x^{\prime}} we obtain the likelihood function:
f(\mathbf{x^{\prime}},\hat{\mu}_{x}, \hat{\sigma}^{2}_{x})=\frac{1}{\left(\sqrt{2\pi}\right)^{N}\hat{\sigma}_{x}^{N}}e^{-\frac{1}{2\hat{\sigma}_{x}^{2}}\sum\limits_{i=0}^{N-1}\left(x^{\prime}[i]-\hat{\mu}_{x}\right)^{2}} (1)
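
As a quick numerical sanity check of (1) (a minimal sketch assuming NumPy and SciPy; the sample and parameter values are arbitrary), the logarithm of the likelihood written out directly agrees with the sum of i.i.d. Gaussian log-densities:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 8
x = rng.normal(loc=1.5, scale=2.0, size=N)  # an arbitrary observation x'

mu, sigma2 = 1.0, 3.0  # arbitrary trial values for mu_x and sigma_x^2

# Logarithm of (1), written out directly.
loglik_direct = -N / 2 * np.log(2 * np.pi * sigma2) \
                - np.sum((x - mu) ** 2) / (2 * sigma2)

# The same quantity via the i.i.d. Gaussian log-densities.
loglik_scipy = norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)).sum()

assert np.isclose(loglik_direct, loglik_scipy)
print(loglik_direct, loglik_scipy)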

The likelihood is strictly positive, so the estimate \hat{\mu}_{x} that maximizes the probability of the observation \mathbf{x}^{\prime} is found where the derivative with respect to \hat{\mu}_{x} vanishes. Since the logarithm is strictly increasing, we may equivalently maximize the natural logarithm of the likelihood function (the log-likelihood function) to obtain the maximum likelihood estimate of \mu_{x}. The log-likelihood was already derived in [3, relation (3)]:
\frac{\partial \ln(f(\mathbf{x^{\prime}},\hat{\mu}_{x}, \hat{\sigma}^{2}_{x}))}{\partial \hat{\mu}_{x}}=0
\frac{\partial \left(-\frac{N}{2}\ln(2\pi \hat{\sigma}^{2}_x)-\frac{1}{2}\sum\limits_{i=0}^{N-1}\left(\frac{x^{\prime}[i]-\hat{\mu}_x}{\hat{\sigma}_x}\right)^2\right)}{\partial \hat{\mu}_{x}}=0
\frac{1}{\hat{\sigma}_{x}^{2}}\sum\limits_{i=0}^{N-1}\left(x^{\prime}[i]-\hat{\mu}_x\right)=0
N \cdot \hat{\mu}_x=\sum\limits_{i=0}^{N-1}\left(x^{\prime}[i]\right)

From the previous relation we obtain the maximum likelihood estimator of \mu_{x}:
\hat{\mu}_x=\frac{\sum\limits_{i=0}^{N-1}\left(x^{\prime}[i]\right)}{N} (2)
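
As a quick check of (2) (a minimal sketch assuming NumPy; a central difference stands in for the analytic derivative above), the derivative of the log-likelihood with respect to \hat{\mu}_{x} indeed vanishes at the sample mean for any fixed variance:

import numpy as np

rng = np.random.default_rng(1)
N = 30
x = rng.normal(2.0, 1.5, size=N)  # arbitrary observation x'
mu_hat = np.mean(x)               # candidate maximizer from (2)
s2 = 1.0                          # any fixed sigma_x^2 > 0 works here

def loglik(mu):
    return -N / 2 * np.log(2 * np.pi * s2) - np.sum((x - mu) ** 2) / (2 * s2)

# Central-difference derivative at mu_hat: zero up to floating-point error.
h = 1e-6
print((loglik(mu_hat + h) - loglik(mu_hat - h)) / (2 * h))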

With the same reasoning we may obtain the maximum likelihood estimator of the variance \hat{\sigma}^{2}_{x}:
\frac{\partial \ln(f(\mathbf{x^{\prime}},\hat{\mu}_{x}, \hat{\sigma}^{2}_{x}))}{\partial \hat{\sigma}^{2}_{x}}=0
\frac{\partial \left(-\frac{N}{2}\ln(2\pi \hat{\sigma}^{2}_x)-\frac{1}{2}\sum\limits_{i=0}^{N-1}\left(\frac{x^{\prime}[i]-\hat{\mu}_x}{\hat{\sigma}_x}\right)^2\right)}{\partial \hat{\sigma}^{2}_{x}}=0
-\frac{N}{2\hat{\sigma}_{x}^{2}}+\frac{1}{2\hat{\sigma}_{x}^{4}}\sum\limits_{i=0}^{N-1}\left(x^{\prime}[i]-\hat{\mu}_{x}\right)^{2}=0

Solving for \hat{\sigma}^{2}_{x} gives the maximum likelihood estimator of the variance of the random process:
\hat{\sigma}_{x}^{2}=\frac{1}{N}\sum\limits_{i=0}^{N-1}\left(x^{\prime}[i]-\hat{\mu}_{x}\right)^{2}  (3)
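
Both closed forms (2) and (3) can be cross-checked by maximizing the log-likelihood numerically (a sketch assuming SciPy's general-purpose optimizer; optimizing over \log \hat{\sigma}^{2}_{x} keeps the variance positive):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N = 50
x = rng.normal(2.0, 1.5, size=N)  # arbitrary observation x'

def neg_loglik(theta):
    mu, log_s2 = theta
    s2 = np.exp(log_s2)  # parametrize by log(sigma^2) so sigma^2 > 0
    return N / 2 * np.log(2 * np.pi * s2) + np.sum((x - mu) ** 2) / (2 * s2)

res = minimize(neg_loglik, x0=[0.0, 0.0])

mu_mle = np.mean(x)                  # closed form (2)
s2_mle = np.mean((x - mu_mle) ** 2)  # closed form (3)

assert np.allclose([res.x[0], np.exp(res.x[1])], [mu_mle, s2_mle], atol=1e-4)
print(mu_mle, s2_mle)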

The mean of the maximum likelihood estimator of the variance is easily obtained using [4, relation (8)] and noting that the maximum likelihood estimator of the variance is \frac{N-1}{N} times the variance estimator used in [1, p. 61, exercise 3.6] (see also [4, solution of exercise 3.6]):
E\left\{ \hat{\sigma}_{x}^{2} \right\}=\frac{N-1}{N} \sigma^{2}_{x} (4)

The variance of the maximum likelihood estimator of the variance can be obtained by analogy to [4] by noting that
\frac{N\hat{\sigma}^2_x}{\sigma^2_x} \sim \chi^2_{N-1}.

From the previous relation the variance of the maximum likelihood variance estimator follows as:
Var\left\{\hat{\sigma}^{2}_{x}\right\}=Var\left\{\frac{\sigma_{x}^{2}}{N}\chi^2_{N-1}\right\}
=\frac{\sigma_{x}^{4}}{N^{2}} Var\left\{\chi^2_{N-1} \right \}
=\frac{\sigma_{x}^{4}}{N^{2}} 2 (N-1)
=\frac{2 \sigma_{x}^{4}(N-1)}{N^{2}} (5)
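
Relations (4) and (5) are easy to confirm by Monte Carlo simulation (a minimal sketch assuming NumPy; N, the number of trials, and the parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
N, trials = 10, 200_000
mu_x, sigma2_x = 1.0, 4.0

x = rng.normal(mu_x, np.sqrt(sigma2_x), size=(trials, N))
s2_hat = np.var(x, axis=1)  # ddof=0, i.e. the MLE (3), for each trial

print(s2_hat.mean(), (N - 1) / N * sigma2_x)           # (4): both ~3.6
print(s2_hat.var(), 2 * sigma2_x**2 * (N - 1) / N**2)  # (5): both ~2.88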

By using the results from this exercise and from [2], [4] we can summarize the properties of the MLEs \hat{\mu}_{x}, \hat{\sigma}_{x}^{2} in the following table:
\begin{array}{lcc}
MLE  & \hat{\mu}_x  =  \frac{\sum_{i=0}^{N-1}\left(x^{\prime}[i]\right)}{N} &  \hat{\sigma}_{x}^{2}=\frac{1}{N}\sum_{i=0}^{N-1}\left(x^{\prime}[i]-\hat{\mu}_{x}\right)^{2} \\
\hline 
 mean & \mu_{x} & \frac{N-1}{N} \sigma^{2}_{x}  \\
\hline
variance &\frac{\sigma^{2}_{x}}{N} & \frac{2 \sigma_{x}^{4}(N-1)}{N^{2}}  \\
\hline
CR-bound &  \frac{\sigma^{2}_{x}}{N}  & \frac{2\sigma_{x}^{4}}{N}  \\
\hline
distribution & \hat{\mu}_x \sim N(\mu_{x},\frac{\sigma^{2}_{x}}{N}) & \frac{N\hat{\sigma}^{2}_{x}}{\sigma^{2}_{x}} \sim \chi^{2}_{N-1}
\end{array}
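
The distribution row can also be illustrated numerically: as a minimal sketch (assuming SciPy's sample skewness; parameter values arbitrary), the skewness of \hat{\sigma}^{2}_{x} shrinks toward the Gaussian value of zero as N grows:

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)
sigma2_x, trials = 4.0, 20_000

for N in (5, 50, 500):
    x = rng.normal(0.0, np.sqrt(sigma2_x), size=(trials, N))
    s2_hat = np.var(x, axis=1)  # the MLE (3) for each trial
    print(N, skew(s2_hat))      # skewness of chi^2_{N-1} is sqrt(8/(N-1)) -> 0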

From the previous table it is evident that for large N the means of the estimators converge to the true parameter values, so both estimators are asymptotically unbiased. Their variances tend to zero and approach the CR-bound: \hat{\mu}_{x} attains it exactly for every N, while the ratio of Var\left\{\hat{\sigma}^{2}_{x}\right\} to its CR-bound is \frac{N-1}{N} \rightarrow 1 as N \rightarrow \infty, so both estimators are asymptotically efficient. The MLE of the mean is Gaussianly distributed even for small N, while by the central limit theorem [5, p. 622] the MLE of the variance \hat{\sigma}_{x}^{2} is asymptotically Gaussianly distributed as N \rightarrow \infty. So the answer to the question whether the MLEs of the parameters are asymptotically unbiased, efficient, and Gaussianly distributed is yes. QED.

[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN: 0-13-598582-X.
[2] Chatzichrisafis: “Solution of exercise 3.4 from Kay’s Modern Spectral Estimation - Theory and Applications”.
[3] Chatzichrisafis: “Solution of exercise 3.5 from Kay’s Modern Spectral Estimation - Theory and Applications”.
[4] Chatzichrisafis: “Solution of exercise 3.6 from Kay’s Modern Spectral Estimation - Theory and Applications”.
[5] Granino A. Korn and Theresa M. Korn: “Mathematical Handbook for Scientists and Engineers”, Dover, ISBN: 978-0-486-41147-7.