In [1, p. 61, exercise 3.8] we are asked to prove that the sample mean is a sufficient statistic for the mean under the conditions of [1, p. 61, exercise 3.4], assuming that \sigma^{2}_{x} is known. We are then asked to find the MLE of the mean by maximizing p(\hat{\mu}_{x},\mu_{x}).
Solution: By definition [1, p. 48], a statistic \hat{\mu}_{x} is sufficient for \mu_{x} if the conditional probability density function p(\mathbf{x}|\hat{\mu}_{x}) does not depend on \mu_{x}. By the Neyman-Fisher factorization theorem, the statistic is sufficient if and only if it is possible to write the PDF as:
p(\mathbf{x},\mu_{x}) = g(\hat{\mu}_{x},\mu_{x})h(\mathbf{x}) (1)

The joint PDF is given by:
p(\mathbf{x},\mu_{x})=\prod_{i=0}^{N-1}\frac{1}{ \sqrt{2\pi}\sigma_{x} }e^{-\frac{1}{2}\left(\frac{x_{i}-\mu_{x}}{\sigma_{x}}\right)^{2}}
=\frac{1}{ \sqrt{2\pi}^{N}\sigma^{N}_{x} } \cdot e^{-\frac{1}{2\sigma^{2}_{x}}\left(\sum\limits_{i=0}^{N-1}x^{2}_{i}-2\sum\limits_{i=0}^{N-1}x_{i}\mu_{x}+N\mu_{x}^{2}\right)}
=\frac{1}{ \sqrt{2\pi}^{N}\sigma^{N}_{x} } \cdot e^{-\frac{N}{2\sigma^{2}_{x}}\left(\frac{\sum\limits_{i=0}^{N-1}x^{2}_{i}}{N}-2\mu_{x} \frac{\sum\limits_{i=0}^{N-1}x_{i}}{N}+\mu_{x}^{2}\right)}
=e^{\frac{N}{\sigma^{2}_{x}}\mu_{x} \hat{\mu}_{x}} \cdot e^{-\frac{N}{2\sigma^{2}_{x}}\mu_{x}^{2}} \frac{1}{ \sqrt{2\pi}^{N}\sigma^{N}_{x} } e^{-\frac{N}{2\sigma^{2}_{x}}\frac{\sum\limits_{i=0}^{N-1}x^{2}_{i}}{N}} (2)
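The factorization can be sanity-checked numerically: for arbitrary data, the product of the statistic-dependent factor g(\hat{\mu}_{x},\mu_{x}) and the data-only factor h(\mathbf{x}) defined below must reproduce the joint Gaussian PDF. The following sketch does this with illustrative values (N, \mu_{x}, \sigma_{x} are not taken from [1]):

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, sigma = 8, 1.3, 0.7
x = rng.normal(mu, sigma, N)
mu_hat = x.mean()  # sample mean, the candidate sufficient statistic

# Joint PDF p(x, mu): product of N univariate Gaussian densities
p_joint = np.prod(np.exp(-0.5 * ((x - mu) / sigma) ** 2)
                  / (np.sqrt(2 * np.pi) * sigma))

# Factored form with g and h as defined in the text
g = np.exp(N * mu * mu_hat / sigma**2 - N * mu**2 / (2 * sigma**2))
h = np.exp(-np.sum(x**2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma) ** N

assert np.isclose(p_joint, g * h)  # factorization holds
```

The check succeeds because completing the square in the exponent, as done above, leaves only the cross term \mu_{x}\hat{\mu}_{x} and the \mu_{x}^{2} term inside g.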

Setting
g(\hat{\mu}_{x},\mu_{x})=e^{\frac{N}{\sigma^{2}_{x}}\mu_{x} \hat{\mu}_{x}}e^{-\frac{N}{2\sigma^{2}_{x}}\mu_{x}^{2}}

and
h(\mathbf{x})=\frac{1}{ \sqrt{2\pi}^{N}\sigma^{N}_{x} } e^{-\frac{N}{2\sigma^{2}_{x}}\frac{\sum\limits_{i=0}^{N-1}x^{2}_{i}}{N}}
=\frac{1}{ \sqrt{2\pi}^{N}\sigma^{N}_{x} } e^{-\frac{N}{2\sigma^{2}_{x}}\frac{\mathbf{x}^{T}\mathbf{x} }{N}}

we see at once that equation (2) has the form of the Neyman-Fisher factorization theorem (1), thus \hat{\mu}_{x} is sufficient. The MLE of \mu_{x} for the measurement \mathbf{x} = \mathbf{x^{\prime}} is obtained by:
\frac{\partial p(\mathbf{x^{\prime}},\mu_{x}) }{\partial \mu_{x}}=0
h(\mathbf{x^{\prime}}) \frac{\partial  g(\hat{\mu}_{x},\mu_{x}) }{\partial\mu_{x}}=0
\left( \frac{N}{\sigma^{2}_{x}} \hat{\mu}_{x}  - \frac{N}{\sigma^{2}_{x}}\mu_{x} \right) g(\hat{\mu}_{x},\mu_{x})=0  \Rightarrow (3)
\mu_{x}=\hat{\mu}_{x}. (4)

Since the exponent of g(\hat{\mu}_{x},\mu_{x}) is quadratic in \mu_{x} with negative leading coefficient, this stationary point is indeed a maximum. Thus the MLE of \mu_{x} is equal to the sample mean \hat{\mu}_{x}, as already obtained in [2]. QED.
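The result can also be verified numerically by maximizing the log-likelihood over a grid of candidate means and comparing against the sample mean. The sketch below uses the expanded quadratic form of the exponent from the derivation above; the data parameters are illustrative, not from [1]:

```python
import numpy as np

rng = np.random.default_rng(1)
N, mu_true, sigma = 200, 2.5, 1.2
x = rng.normal(mu_true, sigma, N)

# Log-likelihood in mu (additive constants dropped), using the
# expansion sum((x - mu)^2) = sum(x^2) - 2*mu*sum(x) + N*mu^2
def log_lik(mu):
    return -(np.sum(x**2) - 2 * mu * np.sum(x) + N * mu**2) / (2 * sigma**2)

# Brute-force maximization over a fine grid of candidate means
grid = np.linspace(x.min(), x.max(), 20001)
mu_ml = grid[np.argmax(log_lik(grid))]

assert abs(mu_ml - x.mean()) < 1e-3  # MLE agrees with the sample mean
```

Because log_lik is vectorized over the grid, the whole search is a single NumPy call; the maximizer lands on the grid point nearest the sample mean, as the derivation predicts.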

[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN: 0-13-598582-X.
[2] Chatzichrisafis: “Solution of exercise 3.7 from Kay’s Modern Spectral Estimation - Theory and Applications”.