Communications in Mathematical Sciences
Volume 19 (2021)
Optimal sample complexity of subgradient descent for amplitude flow via non-Lipschitz matrix concentration
Pages: 2035–2047
We consider the problem of recovering a real-valued $n$-dimensional signal from $m$ phaseless linear measurements and analyze the amplitude-based non-smooth least squares objective. We establish local convergence of subgradient descent with optimal sample complexity, based on the uniform concentration of a random, discontinuous matrix-valued operator arising from the objective’s gradient dynamics. While common techniques for establishing uniform concentration of random functions exploit Lipschitz continuity, we prove that this discontinuous matrix-valued operator satisfies a uniform matrix concentration inequality with high probability when the measurement vectors are Gaussian, as soon as $m=\Omega(n)$. We then show that satisfaction of this inequality is sufficient for subgradient descent with proper initialization to converge linearly to the true solution up to the global sign ambiguity. As a consequence, this guarantees local convergence for Gaussian measurements at optimal sample complexity. The concentration methods of the present work have previously been used to establish recovery guarantees for a variety of inverse problems under generative neural network priors. This paper demonstrates the applicability of these techniques to more traditional inverse problems and serves as a pedagogical introduction to those results.
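To make the setting concrete, the following is a minimal numerical sketch of subgradient descent on the amplitude-based objective $f(x)=\frac{1}{2m}\sum_{i=1}^m\big(|\langle a_i,x\rangle|-b_i\big)^2$ with Gaussian measurement vectors. The dimensions, constant step size, iteration count, and the idealized initialization inside a small ball around the truth (standing in for "proper initialization", e.g. a spectral method) are illustrative assumptions, not the paper's precise scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: recover x_true from m phaseless measurements b_i = |<a_i, x_true>|.
n = 50
m = 8 * n                          # m = Omega(n); the constant 8 is an illustrative choice
A = rng.standard_normal((m, n))    # Gaussian measurement vectors a_i as rows of A
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
b = np.abs(A @ x_true)             # phaseless (amplitude) measurements

def subgradient(x):
    """One subgradient of f(x) = (1/2m) * sum_i (|<a_i, x>| - b_i)^2.

    The objective is non-smooth where <a_i, x> = 0; taking sign(0) = 0
    selects a valid element of the subdifferential there.
    """
    r = A @ x
    return (A.T @ (np.sign(r) * (np.abs(r) - b))) / m

# Idealized "proper initialization": a point near the true signal.
x = x_true + 0.1 * rng.standard_normal(n)

step = 1.0                         # illustrative constant step size
for _ in range(500):
    x -= step * subgradient(x)

# The recovery guarantee is up to the global sign ambiguity: x and -x
# produce identical amplitude measurements, so we report the better match.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

Empirically, once the signs of the inner products $\langle a_i, x\rangle$ stabilize near the solution, the iteration behaves like gradient descent on a well-conditioned least squares problem and `err` shrinks linearly toward zero.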
phase retrieval, subgradient descent, concentration inequality, non-convex optimization
2010 Mathematics Subject Classification
90C26, 94A12, 94A15
Paul Hand was supported by NSF Grant DMS-2022205 and NSF CAREER Grant DMS-1848087.
Oscar Leong acknowledges support of the NSF Graduate Research Fellowship under Grant No. DGE-1450681.
Received 1 November 2020
Accepted 18 May 2021
Published 7 September 2021