In [1, p. 35 exercise 2.14] we are asked to consider the solution to the set of linear equations
\mathbf{R}\mathbf{x}=\mathbf{-r} (1)

where \mathbf{R} is an n \times n Hermitian matrix given by:
\mathbf{R}=P_1\mathbf{e_1}\mathbf{e_1}^H+P_2\mathbf{e_2}\mathbf{e_2}^H (2)

and \mathbf{r} is a complex n \times 1 vector given by
\mathbf{r}=P_1 e^{j2\pi f_1}\mathbf{e_1}+P_2 e^{j2\pi f_2}\mathbf{e_2} (3)

The complex vectors are defined in [1, p.22, (2.27)]. Furthermore, we are asked to assume that f_1=k/n and f_2=l/n, where k,l are distinct integers in the range [-n/2,n/2-1] for n even and [-(n-1)/2,(n-1)/2] for n odd. \mathbf{x} is defined to be an n \times 1 vector. We are asked to show that \mathbf{R} is a singular matrix (assuming n>2) and that there is an infinite number of solutions. A further task is to find the general solution as well as the minimum norm solution of the set of linear equations. The hint provided by the exercise is to note that \mathbf{e}_1/\sqrt{n}, \mathbf{e}_2/\sqrt{n} are eigenvectors of \mathbf{R} with nonzero eigenvalues, and then to assume a solution of the form
\mathbf{x}=\xi_1 \mathbf{e}_1 + \xi_2 \mathbf{e}_2 + \sum\limits_{i=3}^{n}\xi_i \mathbf{e}_i (4)

where \mathbf{e_i}^H\mathbf{e_1}=0, \mathbf{e_i}^H\mathbf{e_2}=0 for i=3,4,...,n and solve for \xi_1, \xi_2.
Solution: Let \lambda_i, i=1,2,\dots,n denote the eigenvalues of the matrix \mathbf{R}; by the provided hint only two of them are different from zero. (This fact can also be derived directly from relations (7),(9) of [2], which show that \delta_{ik}=\frac{1}{n}\mathbf{e_i}^H\mathbf{e_k}. Applying \mathbf{R} to \frac{\mathbf{e}_i}{\sqrt{n}} thus yields the eigenvalue \lambda_1=nP_1 for i=1, \lambda_2=nP_2 for i=2, and \lambda_i=0 for i>2.) The determinant of \mathbf{R} is given by \det(\mathbf{R})=\prod_{i=1}^{n}\lambda_i=0; since the determinant is zero, the matrix is singular. Let \mathbf{x}=\xi_1 \mathbf{e}_1 + \xi_2 \mathbf{e}_2 + \sum_{i=3}^{n}\xi_i \mathbf{e}_i; then
\mathbf{R} \mathbf{x}=(P_1\mathbf{e_1}\mathbf{e_1}^H+P_2\mathbf{e_2}\mathbf{e_2}^H ) \cdot  (\xi_1 \mathbf{e}_1 + \xi_2 \mathbf{e}_2 + \sum\limits_{i=3}^{n}\xi_i \mathbf{e}_i)
=nP_1\xi_1\mathbf{e_1}+nP_2\xi_2\mathbf{e_2} (5)
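As a numerical sanity check of the eigenvalue claim and the singularity of \mathbf{R}, the sketch below assumes the complex sinusoid vectors of [1, p.22, (2.27)] have the form \mathbf{e}_i=[1, e^{j2\pi f_i}, \dots, e^{j2\pi f_i (n-1)}]^T; the values of n, P_1, P_2, k, l are arbitrary illustrative choices, not given by the exercise.

```python
import numpy as np

# Illustrative (hypothetical) parameters; the exercise leaves them symbolic.
n, P1, P2 = 8, 2.0, 3.0
k, l = 1, 3                      # distinct integers, so that f1 = k/n, f2 = l/n
f1, f2 = k / n, l / n

# Complex sinusoid vectors, assuming the form of [1, p.22, (2.27)]:
# e_i = [1, exp(j 2 pi f_i), ..., exp(j 2 pi f_i (n-1))]^T
t = np.arange(n)
e1 = np.exp(1j * 2 * np.pi * f1 * t)
e2 = np.exp(1j * 2 * np.pi * f2 * t)

# R = P1 e1 e1^H + P2 e2 e2^H as a sum of outer products
R = P1 * np.outer(e1, e1.conj()) + P2 * np.outer(e2, e2.conj())

lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # R is Hermitian
print(np.round(lam, 6))        # two nonzero eigenvalues n*P1 and n*P2, the rest zero
print(abs(np.linalg.det(R)))   # ~0: R is singular
```

With these choices the two nonzero eigenvalues come out as n P_1 = 16 and n P_2 = 24, confirming \lambda_1=nP_1, \lambda_2=nP_2, and \det(\mathbf{R})=0.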

To find the solutions to the set of linear equations
\mathbf{R}\mathbf{x}=\mathbf{-r} (6)

we have to set (5) equal to \mathbf{-r} :
\mathbf{-r}=nP_1\xi_1\mathbf{e_1}+nP_2\xi_2\mathbf{e_2}
-(P_1 e^{j2\pi f_1}\mathbf{e_1}+P_2 e^{j2\pi f_2}\mathbf{e_2} )=nP_1\xi_1\mathbf{e_1}+nP_2\xi_2\mathbf{e_2}


Dividing by n and rearranging terms yields
P_1(\frac{1}{n}e^{j2\pi f_1}+\xi_1) \mathbf{e_1}+P_2(\frac{1}{n}e^{j2\pi f_2}+\xi_2) \mathbf{e_2}=0 (7)

Given the linear independence of \mathbf{e_1} and \mathbf{e_2}, the solutions of the set of linear equations (7) are:
\xi_1=-\frac{1}{n}e^{j2\pi f_1}
\xi_2=-\frac{1}{n}e^{j2\pi f_2}
\xi_i \; \text{arbitrary for} \; i>2
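One can also check numerically that \mathbf{R}\mathbf{x}=\mathbf{-r} holds for any choice of the arbitrary coefficients \xi_i, i>2. The sketch below again assumes the sinusoid-vector form of [1, (2.27)] with illustrative parameter values; the DFT vectors with m \neq k,l then play the role of the orthogonal vectors \mathbf{e}_3,\dots,\mathbf{e}_n in (4).

```python
import numpy as np

# Illustrative (hypothetical) parameters; the exercise leaves them symbolic.
n, P1, P2 = 8, 2.0, 3.0
k, l = 1, 3
f1, f2 = k / n, l / n
t = np.arange(n)

# Columns of E are the DFT vectors e_m with f = m/n (assumed form of [1, (2.27)]);
# they satisfy e_i^H e_k = n * delta_ik, so the columns with m != k, l serve as
# the orthogonal vectors e_3, ..., e_n of (4).
E = np.exp(1j * 2 * np.pi * np.outer(t, np.arange(n) / n))
e1, e2 = E[:, k], E[:, l]
R = P1 * np.outer(e1, e1.conj()) + P2 * np.outer(e2, e2.conj())
r = P1 * np.exp(1j * 2 * np.pi * f1) * e1 + P2 * np.exp(1j * 2 * np.pi * f2) * e2

xi1 = -np.exp(1j * 2 * np.pi * f1) / n
xi2 = -np.exp(1j * 2 * np.pi * f2) / n

# Random coefficients on the orthogonal vectors: R x = -r must hold regardless.
rng = np.random.default_rng(0)
others = [m for m in range(n) if m not in (k, l)]
x = xi1 * e1 + xi2 * e2 + E[:, others] @ rng.standard_normal(len(others))
print(np.allclose(R @ x, -r))   # True
```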

The minimum norm solution lies in the subspace spanned by the columns of the matrix \mathbf{R} [3], and is thus given by: \mathbf{x}_{min}=\xi_1 \mathbf{e}_1 + \xi_2 \mathbf{e}_2=-\frac{1}{n}e^{j2\pi f_1}\mathbf{e}_1 -\frac{1}{n}e^{j2\pi f_2}\mathbf{e}_2.
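The closed-form minimum norm solution can be cross-checked against the Moore–Penrose pseudoinverse, which produces the minimum norm solution of a consistent singular system. As before, the parameter values and the form of the \mathbf{e}_i vectors are illustrative assumptions.

```python
import numpy as np

# Illustrative (hypothetical) parameters; the exercise leaves them symbolic.
n, P1, P2 = 8, 2.0, 3.0
k, l = 1, 3
f1, f2 = k / n, l / n
t = np.arange(n)
e1 = np.exp(1j * 2 * np.pi * f1 * t)   # assumed form of [1, p.22, (2.27)]
e2 = np.exp(1j * 2 * np.pi * f2 * t)
R = P1 * np.outer(e1, e1.conj()) + P2 * np.outer(e2, e2.conj())
r = P1 * np.exp(1j * 2 * np.pi * f1) * e1 + P2 * np.exp(1j * 2 * np.pi * f2) * e2

# Closed-form minimum norm solution from the text
x_min = -(np.exp(1j * 2 * np.pi * f1) / n) * e1 \
        - (np.exp(1j * 2 * np.pi * f2) / n) * e2

# The Moore-Penrose pseudoinverse gives the minimum norm solution directly
x_pinv = np.linalg.pinv(R) @ (-r)
print(np.allclose(x_min, x_pinv))   # True
print(np.allclose(R @ x_min, -r))   # True
```

Both checks agree: \mathbf{x}_{min} solves the system and coincides with the pseudoinverse solution.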

[1] Steven M. Kay: “Modern Spectral Estimation – Theory and Applications”, Prentice Hall, ISBN: 0-13-598582-X.
[2] Chatzichrisafis: “Solution of exercise 2.8 from Kay’s Modern Spectral Estimation – Theory and Applications”, lysario.de.
[3] Chatzichrisafis: “Solution of exercise 2.4 from Kay’s Modern Spectral Estimation – Theory and Applications”, lysario.de.