Table of Links
1.1 ESPRIT algorithm and central limit error scaling
1.4 Technical overview and 1.5 Organization
2 Proof of the central limit error scaling
3 Proof of the optimal error scaling
4 Second-order eigenvector perturbation theory
5 Strong eigenvector comparison
5.1 Construction of the “good” P
5.2 Taylor expansion with respect to the error terms
5.3 Error cancellation in the Taylor expansion
C Deferred proofs for Section 2
D Deferred proofs for Section 4
E Deferred proofs for Section 5
F Lower bound for spectral estimation
1.2 Contribution
The main contribution of this paper is a novel analysis of the ESPRIT algorithm that establishes its optimal error scaling. These findings are summarized in Theorem 1.4 below.
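For readers who want a concrete reference point for the algorithm under analysis, here is a minimal sketch of the standard ESPRIT procedure in Python. The window size, normalization, and least-squares formulation below are common textbook choices, not necessarily the exact ones used in the paper.

```python
import numpy as np

def esprit(y, r, L=None):
    """Minimal ESPRIT sketch: estimate r frequencies theta_k from samples
    y[j] ~ sum_k c_k * exp(1j * theta_k * j) + noise,  j = 0, ..., n - 1.

    The Hankel window size L and the least-squares step follow the usual
    textbook formulation; the paper's exact setup may differ.
    """
    n = len(y)
    if L is None:
        L = n // 2  # illustrative window-size choice
    # Hankel matrix H[a, b] = y[a + b], of size (L + 1) x (n - L)
    H = np.array([[y[a + b] for b in range(n - L)] for a in range(L + 1)])
    # Signal subspace: top-r left singular vectors of the noisy Hankel matrix
    U = np.linalg.svd(H, full_matrices=False)[0][:, :r]
    # Shift invariance: solve U[:-1] @ Psi ~= U[1:] in the least-squares sense
    Psi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    # The estimated frequencies are the phases of the eigenvalues of Psi
    return np.sort(np.angle(np.linalg.eigvals(Psi)))

# Example: three frequencies, light complex Gaussian noise
rng = np.random.default_rng(0)
j = np.arange(200)
theta = np.array([0.5, 1.2, 2.0])
y = sum(np.exp(1j * t * j) for t in theta)
y = y + 0.01 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
print(esprit(y, r=3))  # close to [0.5, 1.2, 2.0]
```

The key step is the shift-invariance least-squares problem: the signal subspace of the noiseless Hankel matrix is spanned by Vandermonde columns, so shifting its rows down by one acts as multiplication by the diagonal matrix of phases e^{i theta_k}, which the eigenvalues of Psi recover.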
Remark 1.5 (The assumption α > 1). In Theorem 1.4 we assume α > 1 only to simplify the form of our bounds. The theorem still yields results for the ESPRIT algorithm under small noise, and even zero noise (Ej ≡ 0), since α is only required to be an upper bound on the sub-Gaussian rate of tail decay. Our proof can also be extended to provide sharper estimates for small values of α.
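To make the logic of the remark concrete, one common way to formalize a sub-Gaussian tail-decay condition with parameter α is sketched below; the paper's precise assumption may differ in its constants and exact form, so this is purely illustrative.

```latex
% Illustrative sub-Gaussian tail condition (not necessarily the paper's exact assumption):
\Pr\bigl(|E_j| \ge t\bigr) \;\le\; 2\exp\!\left(-\frac{t^2}{2\alpha^2}\right)
\qquad \text{for all } t \ge 0.
% If E_j \equiv 0, the left-hand side vanishes for t > 0, so the bound holds for
% every \alpha > 0; in particular, requiring \alpha > 1 excludes no noiseless or
% low-noise instance.
```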
We present the proof of Theorem 1.6 in Appendix F. This result is analogous to the Cramér–Rao bound in signal processing (cf. [SN89]). Because our setting differs from that classical one, we provide a self-contained proof based on the total variation distance between Gaussian distributions.
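As a self-contained illustration of that tool (not of the paper's actual argument), the snippet below numerically checks the standard closed-form expression for the total variation distance between two Gaussians with equal variance; the means, variance, and integration grid are arbitrary choices made for the example.

```python
import numpy as np
from scipy.stats import norm

# Two univariate Gaussians with equal variance and shifted means
# (the numbers are arbitrary, chosen only for illustration).
mu1, mu2, sigma = 0.0, 0.3, 1.0

# Closed form: TV(N(mu1, s^2), N(mu2, s^2)) = 2 * Phi(|mu1 - mu2| / (2 s)) - 1
tv_closed_form = 2 * norm.cdf(abs(mu1 - mu2) / (2 * sigma)) - 1

# Direct numerical evaluation: TV = (1/2) * integral of |p - q|
x = np.linspace(-10.0, 10.0, 200_001)
p = norm.pdf(x, loc=mu1, scale=sigma)
q = norm.pdf(x, loc=mu2, scale=sigma)
tv_numeric = 0.5 * np.sum(np.abs(p - q)) * (x[1] - x[0])

print(tv_closed_form, tv_numeric)  # the two values agree to several digits
```

Bounding the total variation distance between the observation distributions of two nearby parameter configurations is the standard two-point route to lower bounds of this type.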
This paper is available on arXiv under a CC BY 4.0 DEED license.
Authors:
(1) Zhiyan Ding, Department of Mathematics, University of California, Berkeley;
(2) Ethan N. Epperly, Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA;
(3) Lin Lin, Department of Mathematics, University of California, Berkeley, Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory, and Challenge Institute for Quantum Computation, University of California, Berkeley;
(4) Ruizhe Zhang, Simons Institute for the Theory of Computing.