In [1, p. 60, exercise 3.1] a 2 \times 1 real random vector \mathbf{x} is given, which is distributed according to a multivariate Gaussian PDF with zero mean and covariance matrix:
\mathbf{C}_{xx}=
\left[
\begin{array}{cc} 
\sigma_{1}^2 & \sigma_{12} \\ 
\sigma_{21} & \sigma_{2}^2
\end{array}\right]

We are asked to find \alpha if \mathbf{y}=\mathbf{L}^{-1}\mathbf{x} where \mathbf{L}^{-1} is given by the relation:

y_{1}=x_{1}
y_{2}=\alpha x_{1}+x_{2}

so that y_{1} and y_{2} are uncorrelated and hence independent. We are also asked to find the Cholesky decomposition of \mathbf{C}_{xx}, which expresses \mathbf{C}_{xx} as \mathbf{L}\mathbf{D}\mathbf{L}^{T}, where \mathbf{L} is lower triangular with 1's on the principal diagonal and \mathbf{D} is a diagonal matrix with positive diagonal elements.
Solution: First we note that
\mathbf{y}=\left[ \begin{array}{cc}y_{1}& y_{2} \end{array}\right]^{T}
=\left[ \begin{array}{cc} 1 & 0 \\ \alpha & 1 \end{array} \right] \mathbf{x}

From the previous relation we find that \mathbf{L}^{-1} = \left[ \begin{array}{cc} 1 & 0 \\ \alpha & 1 \end{array} \right]. Furthermore we note that E\left\{\mathbf{y}\right\} = \mathbf{\mu}_{\mathbf{y}} = \mathbf{L}^{-1} E\left\{\mathbf{x}\right\} = \mathbf{0}, because the mean of \mathbf{x} is zero. The covariance matrix of \mathbf{y} can then be determined as:
\mathbf{C}_{yy}=E\left\{ \left( \mathbf{y}-\mathbf{\mu}_{\mathbf{y}}\right)\left( \mathbf{y}-\mathbf{\mu}_{\mathbf{y}}\right)^{T} \right\}
=E\left\{  \mathbf{y}\mathbf{y}^{T} \right\}
=E\left\{  \mathbf{L}^{-1} \mathbf{x} \mathbf{x}^{T} \left(\mathbf{L}^{-1}\right)^{T} \right\}
=\mathbf{L}^{-1}   E\left\{ \mathbf{x} \mathbf{x}^{T}\right\}  \left( \mathbf{L}^{-1}\right)^{T}
=\mathbf{L}^{-1}   \mathbf{C}_{xx}  \left( \mathbf{L}^{-1}\right)^{T}
=\left[ \begin{array}{cc} 1 & 0 \\ \alpha & 1 \end{array} \right]    \left[\begin{array}{cc} \sigma_{1}^2 & \sigma_{12} \\ \sigma_{21} & \sigma_{2}^2 \end{array}\right]  \left[ \begin{array}{cc} 1 & \alpha  \\ 0 & 1 \end{array} \right]
=\left[ \begin{array}{cc} 1 & 0 \\ \alpha & 1 \end{array} \right]    \left[\begin{array}{cc} \sigma_{1}^2 & \alpha \sigma_{1}^2 +\sigma_{12} \\ \sigma_{21} & \alpha \sigma_{21} + \sigma_{2}^2 \end{array}\right]
=\left[\begin{array}{cc} \sigma_{1}^2 & \alpha \sigma_{1}^2 +\sigma_{12} \\ \alpha \sigma_{1}^{2} + \sigma_{21} &  \alpha^2  \sigma_{1}^2 + \alpha ( \sigma_{12} + \sigma_{21}) + \sigma_{2}^2 \end{array}\right]

We can now determine \alpha such that y_{1} and y_{2} are uncorrelated. The two variables are uncorrelated when the off-diagonal elements of \mathbf{C}_{yy} are zero, i.e. when \alpha = - \frac{ \sigma_{12}}{ \sigma_{1}^2} = -\frac{ \sigma_{21}}{ \sigma_{1}^2}, which is only possible when \sigma_{12} = \sigma_{21}. Because the random variables x_{1}, x_{2} are real, the covariance matrix is symmetric and the condition \sigma_{12} = \sigma_{21} is always fulfilled. For \alpha = -\frac{ \sigma_{21}}{ \sigma_{1}^2} the matrix \mathbf{C}_{yy} can be rewritten as:
\mathbf{C}_{yy}=\left[\begin{array}{cc} \sigma_{1}^2 & 0 \\ 0 &  \frac{\sigma_{21}^2}{\sigma_{1}^{4}} \sigma_{1}^{2}  - \frac{ \sigma_{21}}{ \sigma_{1}^2} \left( 2 \sigma_{21}\right) + \sigma_{2}^2 \end{array}\right]
=\left[\begin{array}{cc} \sigma_{1}^2 & 0 \\ 0 & \sigma_{2}^2 + \frac{\sigma_{21}^2}{\sigma_{1}^{2}}  - 2 \frac{ \sigma_{21}^{2}}{ \sigma_{1}^2}  \end{array}\right]
=\left[\begin{array}{cc} \sigma_{1}^2 & 0 \\ 0 & \sigma_{2}^2 - \frac{ \sigma_{21}^{2}}{ \sigma_{1}^2}  \end{array}\right]
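As a quick sanity check of the algebra above, the following Python sketch (using sympy; the symbol names and the script are ours, not from [1]) builds \mathbf{C}_{yy} symbolically and confirms that \alpha = -\sigma_{12}/\sigma_{1}^{2} zeros the off-diagonal elements:

import sympy as sp

# symbols standing for sigma_1, sigma_2 and sigma_12 = sigma_21 of the text
s1, s2, s12, a = sp.symbols('sigma_1 sigma_2 sigma_12 alpha', real=True)

Cxx  = sp.Matrix([[s1**2, s12], [s12, s2**2]])
Linv = sp.Matrix([[1, 0], [a, 1]])

# C_yy = L^{-1} C_xx (L^{-1})^T; off-diagonal entries are alpha*sigma_1**2 + sigma_12
Cyy = (Linv * Cxx * Linv.T).expand()
print(Cyy)

# substituting alpha = -sigma_12/sigma_1**2 diagonalizes C_yy
print(Cyy.subs(a, -s12 / s1**2).expand())
# -> Matrix([[sigma_1**2, 0], [0, sigma_2**2 - sigma_12**2/sigma_1**2]])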

We know that Gaussian random variables are independent if they are uncorrelated [2, pp. 154-155]. So far we have only found a condition on \alpha that renders y_{1}, y_{2} uncorrelated. From [1, p. 42] we know that the random variables y_{1}, y_{2} are also Gaussian, as they are obtained by a linear transformation of the Gaussian random variables x_{1}, x_{2}. Thus the condition on \alpha makes y_{1}, y_{2} uncorrelated and therefore independent. Now let us proceed to find the Cholesky decomposition of \mathbf{C}_{xx}. We first assume that \sigma_{1}, \sigma_{2}, \sigma_{12}=\sigma_{21} are such that \mathbf{C}_{xx} is positive definite, a condition which is necessary for the covariance matrix \mathbf{C}_{xx} to admit the Cholesky decomposition:
\mathbf{C}_{xx} = \mathbf{L}\mathbf{D}\mathbf{L}^{T}

where the elements of the matrix \mathbf{L}, assuming \{c_{ij}\} are the elements of the matrix \mathbf{C}_{xx}, are given by [1, p.30, (2.55)] (see also [3, relations (10) and (11) ]):
l_{ij}=\left\{ \begin{array}{cc} \frac{c_{i1}}{d_{1}} & j=1 \\ \frac{c_{ij}}{d_{j}} -\sum_{k=1}^{j-1}\frac{l_{ik}d_{k}l_{jk}^{*}}{d_{j}} & j=2,3,...\end{array} \right.

and the elements of the matrix \mathbf{D} are given by the relation:
d_{i}=c_{ii}-\sum\limits_{k=1}^{i-1}d_{k}\left| l_{ik}\right|^{2}
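As an aside, the recursion of (2.55) is easy to transcribe into code. The following Python function is a minimal sketch for the real symmetric case (so the conjugates drop out); it is our transcription, not code from [1]:

import numpy as np

def ldl_decompose(C):
    """Return (L, d) with C = L @ np.diag(d) @ L.T and L unit lower triangular."""
    n = C.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # d_j = c_jj - sum_{k<j} d_k |l_jk|^2
        d[j] = C[j, j] - np.sum(d[:j] * L[j, :j] ** 2)
        for i in range(j + 1, n):
            # l_ij = (c_ij - sum_{k<j} l_ik d_k l_jk) / d_j
            L[i, j] = (C[i, j] - np.sum(L[i, :j] * d[:j] * L[j, :j])) / d[j]
    return L, d

Applied to the 2 \times 2 matrix \mathbf{C}_{xx}, this recursion reproduces exactly the elements computed below.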

The corresponding elements of the matrices \mathbf{D} and \mathbf{L} are thus given by:
d_{1}=c_{11}=\sigma_{1}^{2}
l_{11}=\frac{\sigma_{1}^{2}}{\sigma_{1}^{2}}=1
l_{21}=\frac{\sigma_{21}}{\sigma_{1}^{2}}
d_{2}=c_{22}- \sum\limits_{k=1}^{1}d_{k}\left|l_{2k}\right|^{2}
=\sigma_{2}^{2} -d_{1}\left|l_{21}\right|^{2}
=\sigma_{2}^{2} -\sigma_{1}^{2}\left|\frac{\sigma_{21}}{\sigma_{1}^{2}}\right|^{2}
=\sigma_{2}^{2} -\frac{\sigma_{21}^{2}}{\sigma_{1}^{2}}
l_{12}=\frac{\sigma_{12}}{d_{2}} -\frac{l_{11}d_{1}l^{*}_{21}}{d_{2}}
=\frac{ \sigma_{12}- \sigma_{1}^{2}\frac{ \sigma_{21} }{ \sigma^{2}_{1} } }{d_{2}}
=\frac{\sigma_{12}-\sigma_{21}}{ \sigma_{2}^{2} -\frac{\sigma_{21}^{2}}{\sigma_{1}^{2}}}
=0 \qquad \text{(since } \sigma_{12}=\sigma_{21} \text{, as required for } \mathbf{L} \text{ to be lower triangular)}
l_{22}=\frac{\sigma_{2}^{2}-\sigma_{1}^{2}(\frac{\sigma_{21}}{\sigma_{1}^{2}})^{2}}{\sigma_{2}^{2} -\frac{\sigma_{21}^{2}}{\sigma_{1}^{2}} }
=\frac{\sigma_{2}^{2}-\frac{\sigma_{21}^{2}}{\sigma_{1}^{2}}}{\sigma_{2}^{2} -\frac{\sigma_{21}^{2}}{\sigma_{1}^{2}} }
=1

Thus the matrices \mathbf{L},\mathbf{D} are given by:
\mathbf{L}=\left[
\begin{array}{cc}
1 & 0\\
 \frac{\sigma_{21}}{\sigma_{1}^{2}}  & 1
\end{array}
\right]
\mathbf{D}=\left[
\begin{array}{cc}
\sigma_{1}^{2} & 0\\
0 & \sigma_{2}^{2} -\frac{\sigma_{21}^{2}}{\sigma_{1}^{2}}
\end{array}
\right]

And the matrix \mathbf{C}_{xx} is recovered from the Cholesky decomposition as \mathbf{C}_{xx}=\mathbf{L} \mathbf{D} \mathbf{L}^{T}. Using [1, p. 23, (2.29)], which states that \mathbf{B}^{T}\mathbf{A}^{T}=\left( \mathbf{A}\mathbf{B}\right)^{T} for two matrices \mathbf{A}, \mathbf{B}, the matrix \mathbf{C}_{yy} is given by:
\mathbf{C}_{yy}=\mathbf{L}^{-1} \mathbf{C}_{xx} \left( \mathbf{L}^{-1}\right)^{T}
=\mathbf{L}^{-1}\mathbf{L} \mathbf{D} \mathbf{L}^{T}\left( \mathbf{L}^{-1}\right)^{T}
=(\mathbf{L}^{-1}\mathbf{L}) \mathbf{D}  \left( \mathbf{L}^{-1} \mathbf{L}  \right)^{T}
=\mathbf{D}.
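A numerical spot check in Python (with arbitrarily chosen values \sigma_{1}^{2}=4, \sigma_{2}^{2}=3, \sigma_{12}=\sigma_{21}=1, for which \mathbf{C}_{xx} is positive definite) confirms both results:

import numpy as np

Cxx = np.array([[4.0, 1.0], [1.0, 3.0]])
L   = np.array([[1.0, 0.0], [0.25, 1.0]])   # l_21 = sigma_21 / sigma_1^2 = 1/4
D   = np.diag([4.0, 3.0 - 0.25])            # d_2 = sigma_2^2 - sigma_21^2 / sigma_1^2

print(np.allclose(L @ D @ L.T, Cxx))        # True: C_xx = L D L^T
Linv = np.linalg.inv(L)
print(np.allclose(Linv @ Cxx @ Linv.T, D))  # True: C_yy = D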



[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN 0-13-598582-X.
[2] Athanasios Papoulis: “Probability, Random Variables, and Stochastic Processes”, McGraw-Hill.
[3] Chatzichrisafis: “Solution of exercise 2.12 from Kay’s Modern Spectral Estimation – Theory and Applications”.