In [1, p. 34, exercise 2.12] we are asked to verify the equations given for the Cholesky decomposition, [1, (2.53)-(2.55)], and to use these equations to find the inverse of the matrix given in problem [1, 2.7].
Solution: The Cholesky decomposition provides a numerical approach for solving linear equations \mathbf{A}\mathbf{x}=\mathbf{b}. First the matrix \mathbf{A} is decomposed into the matrix product \mathbf{A}=\mathbf{L}\mathbf{D}\mathbf{L}^H, where \mathbf{L} is a lower triangular matrix whose elements on the main diagonal are ones, l_{ii}=1, and \mathbf{D} is a diagonal matrix. In order to verify [1, (2.53)] we set \mathbf{y}=\mathbf{D}\mathbf{L}^H\mathbf{x}, so that \mathbf{A}\mathbf{x}=\mathbf{b} becomes \mathbf{L}\mathbf{y}=\mathbf{b}, or
\left[ {\begin{array}{*{20}c}
   1 & 0 &  \cdots  & 0  \\
   {l_{21} } & 1 &  \cdots  & 0  \\
    \vdots  &  \vdots  &  \ddots  &  \vdots   \\
   {l_{n1} } & {l_{n2} } &  \cdots  & 1  \\
\end{array}} \right]\left[ {\begin{array}{*{20}c}
   {y_1 }  \\
   {y_2 }  \\
    \vdots   \\
   {y_n }  \\
\end{array}} \right] = \left[ {\begin{array}{*{20}c}
   {b_1 }  \\
   {b_2 }  \\
    \vdots   \\
   {b_n }  \\
\end{array}} \right] (1)

which is written analytically as:
y_1=b_1
l_{21}y_1+y_2=b_2
l_{31}y_1+l_{32}y_2+y_3=b_3
\vdots
l_{n1}y_1+l_{n2}y_2+\cdots+y_n=b_n

In order to obtain the solution for the vector \mathbf{y} the previous set of equations can be rewritten as
y_1=b_1
y_2=b_2 -l_{21}y_1
y_3=b_3-l_{31}y_1- l_{32}y_2
\vdots
y_n=b_n -\sum\limits^{n-1}_{k=1}l_{nk}y_k  (2)
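
The forward substitution (2) is easily mirrored in code. The following is a minimal Python/NumPy sketch; the function name forward_substitution and the use of NumPy are illustrative choices, not part of [1]:

import numpy as np

def forward_substitution(L, b):
    # Solve L y = b, where L is lower triangular with unit diagonal.
    # Implements (2): y_i = b_i - sum_{k<i} l_{ik} y_k.
    n = len(b)
    y = np.zeros(n, dtype=complex)
    for i in range(n):
        y[i] = b[i] - np.dot(L[i, :i], y[:i])
    return y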

The set of equations (2) verifies [1, (2.53)]. We had previously set \mathbf{y}=\mathbf{D}\mathbf{L}^H\mathbf{x}, from which \mathbf{D}^{-1}\mathbf{y}=\mathbf{L}^H\mathbf{x} follows. Expanding this compact matrix notation again, the following set of equations is obtained:
x_1+l^{*}_{21}x_2+l^{*}_{31}x_3+\cdots+l^{*}_{n1}x_n=\frac{y_1}{d_1}
x_2+l^{*}_{32}x_3+\cdots+l^{*}_{n2}x_n=\frac{y_2}{d_2}
\vdots
x_n=\frac{y_n}{d_n}	(3)

The solution for the vector \mathbf{x} can be found by rewriting the previous set of equations:
x_1=\frac{y_1}{d_1}-\sum\limits^{n}_{k=2}l_{k1}^{*}x_k
x_2=\frac{y_2}{d_2}-\sum\limits^{n}_{k=3}l_{k2}^{*}x_k
\vdots
x_u=\frac{y_u}{d_u}-\sum\limits^{n}_{k=u+1}l_{ku}^{*}x_k (4)
x_n=\frac{y_n}{d_n}
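
The back substitution above starts with x_n and proceeds upward. A corresponding Python sketch, reusing the NumPy import from the previous sketch (the function name back_substitution is again an illustrative choice):

def back_substitution(L, d, y):
    # Solve D L^H x = y for x, with D = diag(d) and L unit lower triangular.
    # Implements (4): x_i = y_i / d_i - sum_{k>i} conj(l_{ki}) x_k.
    n = len(y)
    x = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):
        x[i] = y[i] / d[i] - np.dot(np.conj(L[i + 1:, i]), x[i + 1:])
    return x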

The sets of equations (3) and (4) verify [1, (2.54)]. Equation [1, (2.55)] still has to be verified. This equation provides the algorithm to decompose the matrix \mathbf{A}. Let \mathbf{L}=[l_{ij}] with
l_{ij}  = \left\{ {\begin{array}{*{20}c}
   {l_{ij} } & {,i \ge j}  \\
   0 & {,i < j}  \\
\end{array}} \right. . (5)

Thus
\mathbf{L}^H=[l^H_{ij}]=[l^{*}_{ji}]= \left\{ {\begin{array}{*{20}c}
   {l^{*}_{ji} } & {,j \ge i}  \\
   0 & {,j < i}  \\
\end{array}} \right. . (6)

Furthermore let \mathbf{D}=[d_{ij}] with
d_{ij}  = \left\{ {\begin{array}{*{20}c}
   {d_{i} } & {,i = j}  \\
   0 & {,i \neq j}  \\
\end{array}} \right. . (7)

The elements of the matrix product \mathbf{E}=\mathbf{L}\mathbf{D} are given by
e_{ij}=\sum\limits^{i}_{m=1}l_{im}d_{mj}, (8)

and the elements a_{ij} of \mathbf{A}=\mathbf{L}\mathbf{D}\mathbf{L}^H=\mathbf{E}\mathbf{L}^H are obtained by a_{ij}=\sum^{i}_{k=1}e_{ik}l^H_{kj}=\sum^{j}_{k=1}e_{ik}l^{*}_{jk} (because l^{*}_{jk}=0 for j<k). When the elements e_{ik} are replaced by the corresponding sum (8), the elements a_{ij} can be further expanded to a_{ij}=\sum_{k=1}^{j}\left(\sum^{i}_{m=1}l_{im}d_{mk}\right)\cdot l_{jk}^{*}. But d_{mk}=0 for m \neq k, and thus:
a_{ij}=\sum\limits^{j}_{k=1}l_{ik}d_{k} l_{jk}^{*}
a_{ij}=\sum\limits^{j-1}_{k=1}l_{ik}d_{k} l_{jk}^{*}+l_{ij}d_j l^{*}_{jj}
a_{ij}=\sum\limits^{j-1}_{k=1}l_{ik}d_{k} l_{jk}^{*}+d_j l_{ij} (9)

The last equation (9) is a consequence of the fact that l^{*}_{jj}=1. A recursive formula for l_{ij} can be derived when d_j \neq 0:
l_{ij}=\frac{a_{ij}}{d_j}-\sum\limits^{j-1}_{k=1}\frac{l_{ik}d_k l^{*}_{jk}}{d_j}, (10)

Equation (10) can be identified as the second part of the relation [1, (2.55)]. The first part is obtained by simply setting j=1. In this case the sum in the second term is empty and the relation reduces to l_{i1}=\frac{a_{i1}}{d_1}. From (9), by setting i=j, the following relation emerges:
a_{ii}=\sum\limits^{i-1}_{k=1}l_{ik}d_{k}l_{ik}^{*}+d_i l_{ii}
d_i=a_{ii}-\sum\limits^{i-1}_{k=1}l_{ik}d_{k} l_{ik}^{*}
d_i=a_{ii}-\sum\limits^{i-1}_{k=1}\left|l_{ik}\right|^2 d_{k} (11)

and again for i=1 the sum is empty and we obtain the relation d_1=a_{11}. In order to find the inverse of the real matrix given in problem [1, 2.7]:
\mathbf{A}=\left[ {\begin{array}{*{20}c}
   1 & { - a} & {a^2 }  \\
   { - a} & 1 & { - a}  \\
   {a^2 } & { - a} & 1  \\
\end{array}} \right] (12)

we can compute the solutions \mathbf{r}_n of \mathbf{A}\mathbf{r}_n=\mathbf{e}_n by (2), (3), \mathbf{e}_n being the n^{th} vector of the standard basis, n=1,2,3. The inverse matrix will then be \mathbf{R}=\left[\mathbf{r}_1 \; \mathbf{r}_2 \; \mathbf{r}_3 \right]. The Cholesky decomposition of the matrix \mathbf{A} is obtained by:
d_1=1
l_{11}=\frac{a_{11}}{d_1}=1
l_{21}=\frac{a_{21}}{d_1}=-a
l_{31}=\frac{a_{31}}{d_1}=a^2
d_2=a_{22}-\sum\limits^{1}_{k=1}d_k|l_{2k}|^2=1-a^2
l_{32}=\frac{a_{32}}{d_2}-\sum\limits_{k=1}^{1}\frac{l_{3k}d_kl^*_{2k}}{d_2}=\frac{-a}{1-a^2}+\frac{a^3}{1-a^2}=-a
d_3=a_{33}-\sum\limits^{2}_{k=1}|l_{3k}|^2d_{k}=1-d_1|l_{31}|^2-d_2|l_{32}|^2
=1-a^4-(1-a^2)a^2=1-a^2
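
The recursion (10), (11) and the hand computation above can be cross-checked numerically. Below is a minimal Python sketch of the decomposition, followed by a check for the arbitrary test value a = 0.5; the value and the function name ldl_decompose are illustrative assumptions, not taken from [1]:

import numpy as np

def ldl_decompose(A):
    # Compute A = L D L^H via (10) and (11); L is unit lower triangular,
    # d holds the diagonal of D. Assumes all pivots d_j are nonzero.
    n = A.shape[0]
    L = np.eye(n, dtype=complex)
    d = np.zeros(n, dtype=complex)
    for j in range(n):
        # (11): d_j = a_jj - sum_{k<j} |l_jk|^2 d_k
        d[j] = A[j, j] - np.dot(np.abs(L[j, :j]) ** 2, d[:j])
        for i in range(j + 1, n):
            # (10): l_ij = a_ij / d_j - sum_{k<j} l_ik d_k conj(l_jk) / d_j
            L[i, j] = (A[i, j] - np.dot(L[i, :j] * d[:j], np.conj(L[j, :j]))) / d[j]
    return L, d

a = 0.5  # arbitrary test value, chosen only for illustration
A = np.array([[1, -a, a**2], [-a, 1, -a], [a**2, -a, 1]], dtype=complex)
L, d = ldl_decompose(A)
# Expected from the computation above: l_21 = -a, l_31 = a^2, l_32 = -a,
# d = [1, 1 - a^2, 1 - a^2]; the reconstruction check should print True.
print(np.allclose(L @ np.diag(d) @ L.conj().T, A))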

By equation (2) we obtain for \mathbf{e}_n=\mathbf{e}_1:
y_1=b_1=1
y_2=b_2-l_{21}y_1=-l_{21}=a
y_3=b_3-l_{31}y_1-l_{32}y_2=0-a^2+a^2=0

and thus by (3):
r_{31}=x_3=\frac{y_3}{d_3}=0
r_{21}=x_2=\frac{y_2}{d_2}-l^*_{32}x_3=\frac{a}{1-a^2}
r_{11}=x_{1}=\frac{y_1}{d_1}-l^*_{21}x_2-l^*_{31} x_3=1+\frac{a^2}{1-a^2}=\frac{1}{1-a^2}
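
With the sketches above, this first column can be reproduced numerically (continuing the illustrative a = 0.5 example and reusing L and d from the decomposition sketch):

e1 = np.array([1, 0, 0], dtype=complex)
y = forward_substitution(L, e1)    # expected: [1, a, 0]
r1 = back_substitution(L, d, y)    # expected: [1/(1-a^2), a/(1-a^2), 0]
print(np.round(r1.real, 6))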

For \mathbf{e}_n=\mathbf{e}_2 using the formula (2):
y_1=b_1=0
y_2=b_2-l_{21}y_1=1
y_3=b_3-l_{31}y_1-l_{32}y_2=a

and thus by (3):
r_{32}=x_3=\frac{y_3}{d_3}=\frac{a}{1-a^2}
r_{22}=x_2=\frac{y_2}{d_2}-l^*_{32}x_3=\frac{1}{1-a^2}+\frac{a^2}{1-a^2}=\frac{1+a^2}{1-a^2}
r_{12}=x_{1}=\frac{y_1}{d_1}-l^*_{21}x_2-l^*_{31} x_3=0+\frac{a+a^3}{1-a^2}-\frac{a^3}{1-a^2}=\frac{a}{1-a^2}

Finally for \mathbf{e}_n=\mathbf{e}_3 using again the formula (2):
y_1=b_1=0
y_2=b_2-l_{21}y_1=0
y_3=b_3-l_{31}y_1-l_{32}y_2=1

and thus by (3):
r_{33}=x_3=\frac{y_3}{d_3}=\frac{1}{1-a^2}
r_{23}=x_2=\frac{y_2}{d_2}-l^*_{32}x_3=\frac{a}{1-a^2}
r_{13}=x_{1}=\frac{y_1}{d_1}-l^*_{21}x_2-l^*_{31} x_3=0+\frac{a^2}{1-a^2}-\frac{a^2}{1-a^2}=0

Thus the inverse of the matrix \mathbf{A} is given by
\mathbf{R}=\left[ {\begin{array}{*{20}c}
   \frac{1}{1-a^2} & {\frac{a}{1-a^2}} & {0 }  \\
   {\frac{a}{1-a^2}} & \frac{1+a^2}{1-a^2} & {\frac{a}{1-a^2}}  \\
   {0 } & {\frac{a}{1-a^2}} & \frac{1}{1-a^2}  \\
\end{array}} \right] (13)
=\frac{1}{(1-a^2)^2}\left[ {\begin{array}{*{20}c}
   1-a^2 & a-a^3 & 0      \\
   a-a^3 & 1-a^4 & a-a^3  \\
   0 & a-a^3 & 1-a^2      \\
   \end{array}} \right] (14)

This is exactly the solution that was obtained in [2] for problem [1, 2.7].
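
As a final numerical sanity check, the sketches above can be used to assemble \mathbf{R} column by column and to verify \mathbf{A}\mathbf{R}=\mathbf{I}, again for the illustrative value a = 0.5:

n = A.shape[0]
R = np.zeros((n, n), dtype=complex)
for k in range(n):
    e = np.zeros(n, dtype=complex)
    e[k] = 1
    R[:, k] = back_substitution(L, d, forward_substitution(L, e))
print(np.allclose(A @ R, np.eye(n)))  # should print True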

[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN: 0-13-598582-X.
[2] Panagiotis Chatzichrisafis: “Solution of exercise 2.7 from Kay’s Modern Spectral Estimation - Theory and Applications”, lysario.de.