In [1, p. 61, exercise 3.9] we are asked to consider the real linear model
x[n]=\alpha + \beta n + z[n], \quad n=0,1,\ldots,N-1 (1)

and to find the MLE of the slope \beta and the intercept \alpha, assuming that z[n] is real white Gaussian noise with zero mean and variance \sigma_{z}^{2}. We are further asked to find the MLE of \alpha when the model has \beta=0.
Solution: The probability density function of the Gaussian variable
z[n]=x[n]-\alpha-\beta n (2)

with mean zero and variance \sigma_{z}^{2} is given by
f(\mathbf{z})=\frac{1}{(2\pi)^{N/2}\sigma^{N}_{z}}e^{-\frac{1}{2\sigma^{2}_{z}} \mathbf{z}^{T}\mathbf{z}}
f(z_{0},z_{1},\ldots,z_{N-1})=\frac{1}{(2\pi)^{N/2}\sigma^{N}_{z}}e^{-\frac{1}{2\sigma^{2}_{z}} \sum\limits_{n=0}^{N-1}z_{n}^{2}} (3)
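Numerically it is more convenient to work with the logarithm of (3). A minimal Python sketch of the log-likelihood, added here for illustration (not part of the original solution; numpy assumed, names illustrative):

import numpy as np

def log_likelihood(x, alpha, beta, sigma_z):
    # Log of (3), with z[n] = x[n] - alpha - beta*n substituted per (2).
    N = len(x)
    z = x - alpha - beta * np.arange(N)
    return -N / 2 * np.log(2 * np.pi * sigma_z**2) - (z @ z) / (2 * sigma_z**2)

The first term does not depend on \alpha, \beta; only the quadratic term, the exponent discussed next, matters for the maximization.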

The MLE can be found by maximizing the exponent of (3) over \alpha, \beta (the factor in front does not depend on the parameters), which is equivalent to minimizing the sum of squared residuals:
g(\alpha,\beta)=-\frac{1}{2\sigma^{2}_{z}} \sum\limits_{n=0}^{N-1}z_{n}^{2}
=-\frac{1}{2\sigma^{2}_{z}} \sum\limits_{n=0}^{N-1}(x[n]-\alpha-\beta n)^{2} (4)

The gradient of the exponent is
\nabla g(\alpha,\beta)=\left[ \begin{array}{cc} \frac{\partial g(\alpha,\beta)}{\partial \alpha} & \frac{\partial g(\alpha,\beta)}{\partial \beta} \end{array}\right]^{T}
=\frac{1}{\sigma_{z}^{2}}\left[ \begin{array}{cc} \sum\limits_{n=0}^{N-1}(x[n]-\alpha-\beta n) & \sum\limits_{n=0}^{N-1}(x[n]n-\alpha n-\beta n^{2}) \end{array}\right]^{T} (5)
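As a quick symbolic check of (5), the following sketch (using sympy; the small N and variable names are illustrative assumptions, not part of the original solution) differentiates the exponent (4) and compares against the claimed components:

import sympy as sp

N = 5
alpha, beta, sigma = sp.symbols('alpha beta sigma', positive=True)
x = sp.symbols('x0:5')  # stand-ins for the samples x[0], ..., x[N-1]

# Exponent g(alpha, beta) from (4)
g = -sp.Rational(1, 2) / sigma**2 * sum((x[n] - alpha - beta * n)**2 for n in range(N))

# Gradient components claimed in (5)
d_alpha = sum(x[n] - alpha - beta * n for n in range(N)) / sigma**2
d_beta = sum(x[n] * n - alpha * n - beta * n**2 for n in range(N)) / sigma**2

print(sp.simplify(sp.diff(g, alpha) - d_alpha))  # 0
print(sp.simplify(sp.diff(g, beta) - d_beta))    # 0

Both differences simplify to zero, so the differentiation in (5) checks out.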

At an extremum the gradient \nabla g(\alpha,\beta) must vanish. Thus
\nabla g(\alpha,\beta)=\mathbf{0}
\left[ \begin{array}{c} {\sum\limits_{n=0}^{N-1}(x[n]-\alpha-\beta n)} \\ {\sum\limits_{n=0}^{N-1}(x[n] n-\alpha n-\beta n^{2})} \end{array}\right]=\mathbf{0}
\left[ \begin{array}{c}N \alpha + \beta \sum\limits_{n=0}^{N-1}n \\ \alpha \sum\limits_{n=0}^{N-1}n +\beta  \sum\limits_{n=0}^{N-1}n^{2}  \end{array}\right]=\left[ \begin{array}{c} {\sum\limits_{n=0}^{N-1}x[n]} \\ {\sum\limits_{n=0}^{N-1}(x[n] n)} \end{array}\right]
\left[ \begin{array}{cc}N & \sum\limits_{n=0}^{N-1}n \\ \sum\limits_{n=0}^{N-1}n &  \sum\limits_{n=0}^{N-1}n^{2} \end{array}\right]  \left[ \begin{array}{c} \alpha \\ \beta \end{array}\right]=\left[ \begin{array}{c} {\sum\limits_{n=0}^{N-1}x[n]} \\ {\sum\limits_{n=0}^{N-1}(x[n] n)} \end{array}\right] (6)

Using [2, p. 980, E-4, 1.]: \sum_{n=0}^{N-1}n=\frac{N(N-1)}{2} and [2, p. 980, E-4, 5.]: \sum_{n=0}^{N-1}n^{2}=\frac{N(N-1)(2N-1)}{6}, (6) can be written as:
\left[ \begin{array}{cc}N & \frac{N(N-1)}{2} \\ \frac{N(N-1)}{2} &  \frac{N(N-1)(2N-1)}{6} \end{array}\right]  \left[ \begin{array}{c} \alpha \\ \beta \end{array}\right]=\left[ \begin{array}{c} {\sum\limits_{n=0}^{N-1}x[n]} \\ {\sum\limits_{n=0}^{N-1}(x[n] n)} \end{array}\right] (7)

From (7) we obtain, for N>1, the following solutions for the intercept \alpha and the slope \beta. The determinant of the matrix in (7) is N\frac{N(N-1)(2N-1)}{6}-\left(\frac{N(N-1)}{2}\right)^{2}=\frac{N^{2}(N+1)(N-1)}{12}, so by Cramer's rule
\alpha = 12 \cdot \frac{N(N-1)}{2} \cdot \frac{\frac{2N-1}{3}\sum\limits_{n=0}^{N-1}x[n]-\sum\limits_{n=0}^{N-1}nx[n]}{N^{2}(N+1)(N-1)}
= 6 \cdot \frac{\frac{2N-1}{3}\sum\limits_{n=0}^{N-1}x[n]-\sum\limits_{n=0}^{N-1}nx[n]}{N(N+1)} (8)
\beta = 12 \cdot \frac{N \sum\limits_{n=0}^{N-1}nx[n]-\frac{N(N-1)}{2}\sum\limits_{n=0}^{N-1}x[n]}{N^{2}(N+1)(N-1)}
= 6 \cdot \frac{2 \sum\limits_{n=0}^{N-1}nx[n]-(N-1)\sum\limits_{n=0}^{N-1}x[n]}{N(N+1)(N-1)} (9)
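The closed forms (8) and (9) can be cross-checked against a direct numerical solve of (7). The following sketch (plain numpy; the synthetic data and parameter values are illustrative assumptions) agrees to machine precision:

import numpy as np

rng = np.random.default_rng(0)
N, alpha_true, beta_true, sigma_z = 200, 1.5, 0.25, 0.5
n = np.arange(N)
x = alpha_true + beta_true * n + sigma_z * rng.standard_normal(N)  # model (1)

Sx, Snx = x.sum(), (n * x).sum()

# Direct solve of the 2x2 normal equations (7)
A = np.array([[N, N * (N - 1) / 2],
              [N * (N - 1) / 2, N * (N - 1) * (2 * N - 1) / 6]])
alpha_solve, beta_solve = np.linalg.solve(A, np.array([Sx, Snx]))

# Closed forms (8) and (9)
alpha_hat = 6 * ((2 * N - 1) / 3 * Sx - Snx) / (N * (N + 1))
beta_hat = 6 * (2 * Snx - (N - 1) * Sx) / (N * (N + 1) * (N - 1))

print(alpha_hat - alpha_solve, beta_hat - beta_solve)  # both ~ 0
print(alpha_hat, beta_hat)                             # close to 1.5, 0.25

Since the noise is white and Gaussian, the MLE coincides with ordinary least squares, so e.g. np.polyfit(n, x, 1) recovers the same pair.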

If the slope \beta=0, the likelihood equation becomes, by the same reasoning,
\sum\limits_{n=0}^{N-1}(x[n]-\alpha)=0 . (10)

The ML solution for \alpha is then the sample mean
\alpha =\frac{1}{N}\sum\limits_{n=0}^{N-1}x[n]. (11)
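A one-line numerical check of (11), with \alpha=1.5, \beta=0 and unit-variance noise as illustrative assumptions:

import numpy as np
x0 = 1.5 + np.random.default_rng(1).standard_normal(1000)  # model (1) with beta = 0
print(x0.mean())  # MLE of alpha; close to the true value 1.5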

[1] Steven M. Kay: "Modern Spectral Estimation: Theory and Application", Prentice Hall, ISBN 0-13-598582-X.
[2] Granino A. Korn and Theresa M. Korn: "Mathematical Handbook for Scientists and Engineers", Dover, ISBN 978-0-486-41147-7.