In exercise [1, p. 34, exercise 2.8] we are asked to find the inverse of the n\times n Hermitian matrix \mathbf{R} given by [1, (2.27)] by recursively applying Woodbury’s identity. This is to be done both for arbitrary f_{1},f_{2} and for f_{1}=k/n, f_{2}=l/n, where k,l are distinct integers in the range \left[ -n/2,n/2-1 \right] for n even and \left[ -(n-1)/2,(n-1)/2 \right] for n odd.
Solution: Woodbury’s identity is given by [1, p. 24, (2.32)]:
(\mathbf{A}+\mathbf{u}\mathbf{u}^{H})^{-1}=\mathbf{A}^{-1}-\frac{\mathbf{A}^{-1}\mathbf{u}\mathbf{u}^{H}\mathbf{A}^{-1}}{1+\mathbf{u}^{H}\mathbf{A}^{-1}\mathbf{u}} (1)
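As a quick numerical sanity check (not part of the original solution; the matrix and vector are randomly generated), Woodbury’s identity (1) can be verified with NumPy:

```python
import numpy as np

# Verify Woodbury's identity (1) for a random Hermitian positive-definite
# matrix A and a random complex column vector u.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + n * np.eye(n)   # Hermitian positive definite
u = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + u @ u.conj().T)
rhs = Ainv - (Ainv @ u @ u.conj().T @ Ainv) \
      / (1 + (u.conj().T @ Ainv @ u).real.item())

assert np.allclose(lhs, rhs)
```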

The matrix given by [1, (2.27)] is
\mathbf{R}=\sigma^2 \mathbf{I}+P_1\mathbf{e_1}\mathbf{e_1}^H+P_2\mathbf{e_2}\mathbf{e_2}^H (2)
=P_2\left[ \left( \frac{\sigma^2}{P_2} \mathbf{I}+\frac{P_1}{P_2}\mathbf{e_1}\mathbf{e_1}^H \right)+ \mathbf{e_2}\mathbf{e_2}^H \right]
=P_2\left[ \frac{P_1}{P_2} \left( \frac{\sigma^2}{P_1} \mathbf{I}+\mathbf{e_1}\mathbf{e_1}^H \right)+ \mathbf{e_2}\mathbf{e_2}^H \right]

Let
\mathbf{B}=\left( \frac{\sigma^2}{P_1} \mathbf{I}+\mathbf{e_1}\mathbf{e_1}^H \right), (3)

then the matrix \mathbf{R} may be rewritten as:
\mathbf{R}=P_2\left[ \frac{P_1}{P_2} \mathbf{B}+ \mathbf{e_2}\mathbf{e_2}^H \right]. (4)

Applying the relation (c\mathbf{A})^{-1}=\frac{1}{c}\mathbf{A}^{-1} and Woodbury’s identity to (4) yields:
\mathbf{R}^{-1}=\frac{1}{P_2}\left[\frac{P_2}{P_1}\mathbf{B}^{-1}-\frac{(\frac{P_2}{P_1})^{2}\mathbf{B}^{-1}\mathbf{e_2}\mathbf{e_2}^H \mathbf{B}^{-1}}{1+\frac{P_2}{P_1} \mathbf{e_2}^H\mathbf{B}^{-1}\mathbf{e_2}} \right] (5)

But the inverse of \mathbf{B} in (5) can be obtained from (3) by applying once more Woodbury’s identity:
\mathbf{B}^{-1}=\frac{P_1}{\sigma^2}\mathbf{I}-\frac{\left(\frac{P_1}{\sigma^2}\right)^2\mathbf{e_1}\mathbf{e_1}^H}{1+\frac{P_1}{\sigma^2}\mathbf{e_1}^H\mathbf{e_1}}
=\frac{P_1}{\sigma^2}\mathbf{I}-\frac{\left(\frac{P_1}{\sigma^2}\right)^2\mathbf{e_1}\mathbf{e_1}^H}{\frac{P_1}{\sigma^2}(\frac{\sigma^2}{P_1}+\mathbf{e_1}^H\mathbf{e_1})}
=\frac{P_1}{\sigma^2}\mathbf{I}-\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+\mathbf{e_1}^H\mathbf{e_1})}
=\frac{P_1}{\sigma^2}\mathbf{I}-\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+n)}. (6)

The last step in (6) follows from the relation
\mathbf{e_i}^H\mathbf{e_i}=\left[ {\begin{array}{*{20}c}
   1  &  {e^{-j2\pi f_i } } & \cdots  &  {e^{-j2\pi (n - 1)f_i } }  
\end{array}} \right] \left[ {\begin{array}{*{20}c}
   1  \\
   {e^{j2\pi f_i } }  \\
    \vdots   \\
   {e^{j2\pi (n - 1)f_i } }  \\
\end{array}} \right]
=n . (7)
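Equations (6) and (7) lend themselves to a numerical sanity check (a sketch, not part of the original solution; the values of n, f_1, \sigma^2, P_1 are illustrative). Note that \mathbf{e_1}^H\mathbf{e_1}=n holds for arbitrary f_1, since every entry of \mathbf{e_1} has unit modulus:

```python
import numpy as np

# Check (7): e1^H e1 = n, and (6): the closed form for B^{-1}.
# The values of n, f1, sigma2, P1 are illustrative.
n = 8
f1 = 0.19                      # arbitrary frequency
sigma2, P1 = 0.5, 2.0
e1 = np.exp(1j * 2 * np.pi * f1 * np.arange(n))[:, None]

assert np.isclose((e1.conj().T @ e1).item(), n)            # (7)

B = (sigma2 / P1) * np.eye(n) + e1 @ e1.conj().T
Binv = (P1 / sigma2) * np.eye(n) \
     - (P1 / sigma2) * (e1 @ e1.conj().T) / (sigma2 / P1 + n)
assert np.allclose(np.linalg.inv(B), Binv)                 # (6)
```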

Note further that, for f_{1}=k/n, f_{2}=l/n with k\neq l, the vectors \mathbf{e_1} and \mathbf{e_2} are orthogonal:
\mathbf{e_1}^H\mathbf{e_2}=\left[ {\begin{array}{*{20}c}
   1  &  {e^{-j2\pi f_1 } } & \cdots  &  {e^{-j2\pi (n - 1)f_1 } }  
\end{array}} \right] \left[ {\begin{array}{*{20}c}
   1  \\
   {e^{j2\pi f_2 } }  \\
    \vdots   \\
   {e^{j2\pi (n - 1)f_2 } }  \\
\end{array}} \right]
=\sum\limits_{m=0}^{n-1}e^{(j2\pi(f_2-f_1)m)} (8)
=\frac{1-e^{(j2\pi(l-k))}}{1-e^{(j\frac{2\pi}{n}(l-k))}}
=0,  (9)

as the sum (8) is a geometric series [2, p. 16] whose numerator in (9) vanishes while the denominator does not for k\neq l. To obtain a simplified result for the inverse of the matrix \mathbf{R} we must also compute the quantities \mathbf{B}^{-1}\mathbf{e_2}\mathbf{e_2}^H \mathbf{B}^{-1} and \mathbf{e_2}^H\mathbf{B}^{-1}\mathbf{e_2}. Utilizing the orthogonality of \mathbf{e_1} and \mathbf{e_2} from (9) and the inverse of \mathbf{B} given by (6), we obtain the following relations:
\mathbf{B}^{-1}\mathbf{e_2}=\frac{P_1}{\sigma^2}\mathbf{e_2}-\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_1}\mathbf{e_1}^H\mathbf{e_2}}{(\frac{\sigma^2}{P_1}+n)}
=\frac{P_1}{\sigma^2}\mathbf{e_2}.

Similarly,
\mathbf{e_2}^H\mathbf{B}^{-1}=\frac{P_1}{\sigma^2}\mathbf{e_2}^H-\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_2}^H\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+n)}
=\frac{P_1}{\sigma^2}\mathbf{e_2}^H.

Thus
\mathbf{B}^{-1}\mathbf{e_2}\mathbf{e_2}^H \mathbf{B}^{-1}=(\frac{P_1}{\sigma^2})^2 \mathbf{e_2}\mathbf{e_2}^H , (10)

and, because of (7), we can also simplify
\mathbf{e_2}^H\mathbf{B}^{-1}\mathbf{e_2}=(\frac{P_1}{\sigma^2}n). (11)
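The orthogonality (9) and the relations (10) and (11) can be checked numerically (a sketch, not part of the original solution; the values of n, k, l, \sigma^2, P_1 are illustrative):

```python
import numpy as np

# Check the orthogonality (9) and relations (10), (11) for
# f_1 = k/n, f_2 = l/n with k != l.
n, k, l = 8, 3, -2
sigma2, P1 = 0.5, 2.0

def steering(f):
    # Column vector [1, e^{j2*pi*f}, ..., e^{j2*pi*(n-1)*f}]^T.
    return np.exp(1j * 2 * np.pi * f * np.arange(n))[:, None]

e1, e2 = steering(k / n), steering(l / n)
assert np.isclose((e1.conj().T @ e2).item(), 0)                       # (9)

Binv = np.linalg.inv((sigma2 / P1) * np.eye(n) + e1 @ e1.conj().T)
lhs10 = Binv @ e2 @ e2.conj().T @ Binv
assert np.allclose(lhs10, (P1 / sigma2) ** 2 * (e2 @ e2.conj().T))    # (10)
assert np.isclose((e2.conj().T @ Binv @ e2).item(), P1 / sigma2 * n)  # (11)
```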

Using (6) and the last two relations (10),(11) the inverse of the matrix \mathbf{R} (5) can be simplified to:
\mathbf{R}^{-1}=\frac{1}{P_2}\left[\frac{P_2}{P_1} \left(\frac{P_1}{\sigma^2}\mathbf{I}-\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+n)} \right)-\frac{(\frac{P_2}{P_1})^{2}(\frac{P_1}{\sigma^2})^2 \mathbf{e_2}\mathbf{e_2}^H}{1+\frac{P_2}{P_1} (\frac{P_1}{\sigma^2})n} \right]
=\frac{1}{P_1} \left(\frac{P_1}{\sigma^2}\mathbf{I}-\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+n)} \right)-\frac{1}{P_2}\frac{(\frac{P_2}{\sigma^2})^2 \mathbf{e_2}\mathbf{e_2}^H}{(1+\frac{P_2}{\sigma^2}n)}
=\frac{1}{\sigma^2}\mathbf{I}-\frac{1}{P_1}\frac{\left(\frac{P_1}{\sigma^2}\right)\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+n)} -\frac{1}{P_2}\frac{(\frac{P_2}{\sigma^2}) \mathbf{e_2}\mathbf{e_2}^H}{(\frac{\sigma^2}{P_2}+n)}
=\frac{1}{\sigma^2}\mathbf{I}-\frac{1}{\sigma^2}\frac{\mathbf{e_1}\mathbf{e_1}^H}{(\frac{\sigma^2}{P_1}+n)} -\frac{1}{\sigma^2}\frac{\mathbf{e_2}\mathbf{e_2}^H}{(\frac{\sigma^2}{P_2}+n)},
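Finally, the closed form for \mathbf{R}^{-1} can be compared against a direct numerical inverse (a sketch, not part of the original solution; the values of n, k, l, \sigma^2, P_1, P_2 are illustrative):

```python
import numpy as np

# Build R from (2) with f_1 = k/n, f_2 = l/n (k != l) and compare
# numpy's inverse with the closed form derived above.
n, k, l = 8, 3, -2
sigma2, P1, P2 = 0.5, 2.0, 3.0

def steering(f):
    # Column vector [1, e^{j2*pi*f}, ..., e^{j2*pi*(n-1)*f}]^T.
    return np.exp(1j * 2 * np.pi * f * np.arange(n))[:, None]

e1, e2 = steering(k / n), steering(l / n)
R = sigma2 * np.eye(n) + P1 * (e1 @ e1.conj().T) + P2 * (e2 @ e2.conj().T)
Rinv = (np.eye(n)
        - (e1 @ e1.conj().T) / (sigma2 / P1 + n)
        - (e2 @ e2.conj().T) / (sigma2 / P2 + n)) / sigma2

assert np.allclose(np.linalg.inv(R), Rinv)
```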

which is the inverse of the matrix \mathbf{R} obtained by recursively applying Woodbury’s identity. Note that the simplification from (5) to this closed form relies on the orthogonality (9), i.e. on f_{1}=k/n, f_{2}=l/n with k\neq l; for arbitrary f_{1},f_{2} the inverse is given by (5) together with (6), since the cross term \mathbf{e_1}^H\mathbf{e_2} no longer vanishes.

[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Application”, Prentice Hall, ISBN 0-13-598582-X.
[2] I. N. Bronstein, K. A. Semendjajew, G. Musiol, and H. Muehlig: “Taschenbuch der Mathematik”, Verlag Harri Deutsch, Thun and Frankfurt am Main, ISBN 3-8171-2003-6.