In [1, p. 62, exercise 3.17] we are asked to verify that the variance of the sample mean estimator for the mean of a real WSS random process,
\hat{\mu}_{x}=\frac{1}{2M+1}\sum\limits_{n=-M}^{M}x[n],

is given by the expression appearing in [1, eq. (3.60), p. 58]. We are further asked what the variance expression reduces to when x[n] is real white noise. The hint given is to use the relationship from [1, eq. (3.64), p. 59].
Solution: Let us first reproduce the equations cited in the problem statement. [1, eq. (3.60), p. 58] is given by
\lim\limits_{M\rightarrow \infty}\frac{1}{2M+1} \sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)c_{xx}[k]=0.

while [1, eq. (3.64), p. 59] is given by:
\sum\limits_{m=-M}^{M}\sum\limits_{n=-M}^{M}g[m-n]=\sum\limits_{k=-2M}^{2M} \left(2M+1-|k|\right)g[k]

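Before using it, the hint identity [1, eq. (3.64), p. 59] can be sanity-checked numerically. The sketch below (Python, with an arbitrary random sequence standing in for g and an arbitrary choice of M) compares both sides:

```python
import random

M = 4
# Arbitrary test sequence g[k] for k = -2M..2M, stored with index offset 2M.
g = [random.gauss(0.0, 1.0) for _ in range(4 * M + 1)]

def g_at(k):
    """Look up g[k] for k in [-2M, 2M]."""
    return g[k + 2 * M]

# Left-hand side: double sum of g[m - n] over m, n in [-M, M].
lhs = sum(g_at(m - n) for m in range(-M, M + 1) for n in range(-M, M + 1))

# Right-hand side: single sum with triangular weights (2M + 1 - |k|).
rhs = sum((2 * M + 1 - abs(k)) * g_at(k) for k in range(-2 * M, 2 * M + 1))

assert abs(lhs - rhs) < 1e-9
```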
First we note that the sample mean is an unbiased estimator of the true mean of the WSS random process x[n]:
E\left\{ \hat{\mu}_{x} \right\}=\frac{1}{2M+1}\sum\limits_{n=-M}^{M} E\left\{ x[n]\right\}
=\frac{1}{2M+1} (2M+1)\mu_{x}
=\mu_{x}.

From this we can derive the variance of \hat{\mu}_{x} as:
E\left\{ (\hat{\mu}_{x}-\mu_{x})^{2}\right\}=E\left\{\hat{\mu}_{x}^{2}\right\}-2\mu_{x} E\left\{\hat{\mu}_{x}\right\}+\mu_{x}^{2}
=E\left\{\hat{\mu}_{x}^{2}\right\}-\mu_{x}^{2}

The squared sample mean can be written as:
\hat{\mu}_{x}^{2}=\left(\frac{1}{2M+1}\right)^{2} \sum\limits_{n=-M}^{M} x[n] \sum\limits_{l=-M}^{M}x[l]
=\left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}\sum\limits_{l=-M}^{M}x[n]x[l], \; \textnormal{substituting } l=n+k \Rightarrow
=\left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}\sum\limits_{k=-M-n}^{M-n}x[n]x[n+k].

From the previous relation we can derive that the mean squared sample mean is given by:
E\left\{\hat{\mu}_{x}^{2}\right\}=\left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}\sum\limits_{k=-M-n}^{M-n}E\left\{x[n]x[n+k]\right\}
=\left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}\sum\limits_{k=-M-n}^{M-n}r_{xx}[k]  (1)
=\left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}\sum\limits_{k=-2M}^{2M}r_{xx}[k]
 - \left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}(1-\delta(n-M))\sum\limits_{k=-2M}^{-M-n-1}r_{xx}[k]
 - \left(\frac{1}{2M+1}\right)^{2}\sum\limits_{n=-M}^{M}(1-\delta(n+M))\sum\limits_{k=M-n+1}^{2M}r_{xx}[k]  (2)

In the previous relations the factors (1-\delta(n-M))=1-\delta_{nM} and (1-\delta(n+M))=1-\delta_{n(-M)} indicate that the second summand vanishes for n=M, while the third summand vanishes for n=-M. The three double sums can be simplified further:
\sum\limits_{n=-M}^{n=M}\sum\limits_{k=-2M}^{2M}r_{xx}[k] =(2M+1)\sum\limits_{k=-2M}^{2M}r_{xx}[k]. (3)

The second double sum can be written as:
\sum\limits_{n=-M}^{M}(1-\delta_{nM})\sum\limits_{k=-2M}^{-M-n-1}r_{xx}[k]=\underbrace{0}_{n=M}+\underbrace{r_{xx}[-2M]}_{n=M-1}
 +\underbrace{r_{xx}[-2M]+r_{xx}[-2M+1]}_{n=M-2}
 +...+\underbrace{\sum\limits_{k=-2M}^{-1}r_{xx}[k]}_{n=-M}
=2M r_{xx}[-2M]
 + (2M-1) r_{xx}[-2M+1]
 + ... + r_{xx}[-1]
=\sum\limits_{u=-2M}^{0} |u|r_{xx}[u], (4)

while, similarly to the previous derivation, the third double sum can be simplified to:
\sum\limits_{n=-M}^{M}(1-\delta_{n(-M)})\sum\limits_{k=M-n+1}^{2M}r_{xx}[k]=\underbrace{0}_{n=-M}+\underbrace{r_{xx}[2M]}_{n=-M+1}
 +\underbrace{r_{xx}[2M]+r_{xx}[2M-1]}_{n=-M+2}
 +...+\underbrace{\sum\limits_{k=1}^{2M}r_{xx}[k]}_{n=M}
=2M r_{xx}[2M]
 +(2M-1) r_{xx}[2M-1]
 + ... + r_{xx}[1]
=\sum\limits_{u=0}^{2M} |u|r_{xx}[u]. (5)

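The closed forms (4) and (5) can be checked numerically for a small M. In this sketch, r stands in for the autocorrelation r_{xx} and is simply an arbitrary random sequence:

```python
import random

M = 3
r = [random.gauss(0.0, 1.0) for _ in range(4 * M + 1)]  # r_xx[k], k = -2M..2M

def r_at(k):
    """Look up r_xx[k] for k in [-2M, 2M]."""
    return r[k + 2 * M]

# Second double sum: the inner sum is empty for n = M, as noted in the text.
lhs2 = sum(sum(r_at(k) for k in range(-2 * M, -M - n))
           for n in range(-M, M + 1))
rhs2 = sum(abs(u) * r_at(u) for u in range(-2 * M, 1))  # eq. (4)
assert abs(lhs2 - rhs2) < 1e-9

# Third double sum: the inner sum is empty for n = -M.
lhs3 = sum(sum(r_at(k) for k in range(M - n + 1, 2 * M + 1))
           for n in range(-M, M + 1))
rhs3 = sum(abs(u) * r_at(u) for u in range(0, 2 * M + 1))  # eq. (5)
assert abs(lhs3 - rhs3) < 1e-9
```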
Using (3),(4) and (5) in (2) we derive the relationship:
E\left\{\hat{\mu}_{x}^{2}\right\}=\left(\frac{1}{2M+1}\right)\sum\limits_{k=-2M}^{2M}r_{xx}[k]
 -\left(\frac{1}{2M+1}\right)^{2} \left(\sum\limits_{u=-2M}^{0} |u|r_{xx}[u]+\sum\limits_{u=0}^{2M} |u|r_{xx}[u]\right)
=\left(\frac{1}{2M+1}\right)\sum\limits_{k=-2M}^{2M}r_{xx}[k] -\left(\frac{1}{2M+1}\right)^{2} \sum\limits_{u=-2M}^{2M} |u|r_{xx}[u]
=\frac{1}{2M+1} \sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)r_{xx}[k] (6)

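The step from (1) to (6) is a purely algebraic identity that holds with any sequence in place of r_{xx}; a numerical sketch (arbitrary sequence, arbitrary M) confirming it:

```python
import random

M = 5
N = 2 * M + 1
r = [random.gauss(0.0, 1.0) for _ in range(4 * M + 1)]  # r_xx[k], k = -2M..2M

def r_at(k):
    """Look up r_xx[k] for k in [-2M, 2M]."""
    return r[k + 2 * M]

# Starting point (1): double sum over the shifted lag ranges [-M-n, M-n].
lhs = (1 / N) ** 2 * sum(sum(r_at(k) for k in range(-M - n, M - n + 1))
                         for n in range(-M, M + 1))

# Result (6): single, triangularly weighted sum.
rhs = (1 / N) * sum((1 - abs(k) / N) * r_at(k) for k in range(-2 * M, 2 * M + 1))

assert abs(lhs - rhs) < 1e-9
```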
Observe that in (1) we could have replaced the sum \sum_{k=-M-n}^{M-n}r_{xx}[k] by its equivalent form \sum_{l=-M}^{M}r_{xx}[l-n] and applied the hinted relation [1, eq. (3.64), p. 59]; this leads to the same result. The approach taken in this solution effectively re-derives the relation provided by the hint. Now, relating the autocovariance to the autocorrelation function:
c_{xx}[k]=E\left\{(x[n]-\mu_{x})(x[n+k]-\mu_{x})\right\}
=E\left\{x[n]x[n+k]\right\}-E\left\{x[n]\right\}\mu_{x}-\mu_{x}E\left\{x[n+k]\right\}+\mu_{x}^{2}
=r_{xx}[k]-\mu_{x}^{2}

and replacing the autocorrelation in (6) by r_{xx}[k]=c_{xx}[k]+\mu_{x}^{2} we obtain the variance of the sample mean as:
E\left\{\hat{\mu}^{2}_{x}\right\}-\mu_{x}^{2}=\frac{1}{2M+1}\sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)(c_{xx}[k]+\mu_{x}^{2}) - \mu_{x}^{2}
=\frac{1}{2M+1}\sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)c_{xx}[k]
 + \frac{\mu_{x}^{2}}{2M+1}\sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right) -\mu_{x}^{2} (7)

Noting that \sum_{k=0}^{N}k=\frac{N(N+1)}{2} [2, p. 125], we can simplify the second part of the previous equation as:
\sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)=4M+1 -\frac{2}{2M+1} \sum\limits_{k=0}^{2M}k
=4M+1 -\frac{2}{2M+1}\frac{2M (2M+1)}{2}
=4M+1-2M
=2M+1 (8)

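Equation (8) is easy to confirm numerically for an arbitrary choice of M:

```python
M = 7  # arbitrary choice
N = 2 * M + 1
total = sum(1 - abs(k) / N for k in range(-2 * M, 2 * M + 1))
assert abs(total - N) < 1e-9  # matches eq. (8): the sum equals 2M + 1
```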
So finally we can rewrite the variance of the sample mean (7) as:
E\left\{\hat{\mu}^{2}_{x}\right\}-\mu_{x}^{2}=\frac{1}{2M+1}\sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)c_{xx}[k]
 + \frac{\mu_{x}^{2}}{2M+1}(2M+1)-\mu_{x}^{2}
=\frac{1}{2M+1}\sum\limits_{k=-2M}^{2M}\left(1-\frac{|k|}{2M+1}\right)c_{xx}[k] (9)

which is the expression whose limit is taken in [1, eq. (3.60), p. 58]. For the case when x[n] is real white noise with variance \sigma_{x}^{2}, the autocovariance is c_{xx}[k]=\sigma_{x}^{2}\delta[k], so only the k=0 term of (9) survives and the variance reduces to \sigma_{x}^{2}/(2M+1), which tends to zero as M\rightarrow\infty. QED.
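As a final numerical sanity check: for real white noise with variance \sigma_{x}^{2} the autocovariance is c_{xx}[k]=\sigma_{x}^{2}\delta[k], so (9) predicts a sample-mean variance of \sigma_{x}^{2}/(2M+1). A short Monte-Carlo sketch in Python (all parameters are arbitrary illustrative choices):

```python
import random

M = 20
N = 2 * M + 1            # number of samples in the sample mean
sigma2 = 2.0             # white-noise variance (arbitrary choice)
mu = 1.5                 # true mean (arbitrary choice)
trials = 20000

random.seed(0)
estimates = []
for _ in range(trials):
    # One realization of white Gaussian noise with mean mu, variance sigma2.
    x = [mu + random.gauss(0.0, sigma2 ** 0.5) for _ in range(N)]
    estimates.append(sum(x) / N)

# Empirical variance of the sample mean vs. the theoretical sigma^2/(2M+1).
emp_var = sum((m - mu) ** 2 for m in estimates) / trials
assert abs(emp_var - sigma2 / N) < 0.01
```

The empirical variance should land close to sigma2/N, consistent with the 1/(2M+1) decay derived above.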

[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN: 0-13-598582-X.
[2] Granino A. Korn and Theresa M. Korn: “Mathematical Handbook for Scientists and Engineers”, Dover, ISBN: 978-0-486-41147-7.