2017
DOI: 10.1137/16m110109x

On the Estimation Performance and Convergence Rate of the Generalized Power Method for Phase Synchronization

Abstract: An estimation problem of fundamental interest is that of phase (or angular) synchronization, in which the goal is to recover a collection of phases (or angles) using noisy measurements of relative phases (or angle offsets). It is known that in the Gaussian noise setting, the maximum likelihood estimator (MLE) has an expected squared ℓ2-estimation error on the same order as the Cramér-Rao lower bound. Moreover, even though the MLE is an optimal solution to a non-convex quadratic optimization problem, it…
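A minimal numerical sketch of the setting the abstract describes — not code from the paper; the model parameters (`n`, `sigma`, the iteration count) and the spectral initialization are illustrative assumptions. It generates noisy relative-phase measurements and runs a generalized-power-method-style iteration (multiply by the data matrix, then project each entry back onto the unit circle):

```python
# Hedged sketch of the generalized power method (GPM) for phase
# synchronization: recover z_k = exp(i*theta_k) from a noisy pairwise
# measurement matrix C ~ z z* + sigma * W. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)
z = np.exp(1j * theta)                     # ground-truth phase vector

# Hermitian Gaussian noise matrix.
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (W + W.conj().T) / 2
sigma = 0.5
C = np.outer(z, z.conj()) + sigma * W

# Spectral initialization: leading eigenvector, projected entrywise
# onto the unit circle.
vals, vecs = np.linalg.eigh(C)
x = vecs[:, -1]
x = x / np.abs(x)

# GPM iteration: multiply by C, then re-project each entry to |x_k| = 1.
for _ in range(100):
    y = C @ x
    x = y / np.abs(y)

# Phases are identifiable only up to a global rotation, so measure
# quality via the modulus of the normalized inner product with z.
corr = np.abs(np.vdot(x, z)) / n
print(round(corr, 3))
```

With this moderate noise level the final correlation is close to 1; the projection step is the "generalized power" twist on the ordinary power method.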

Cited by 78 publications (100 citation statements)
References 23 publications
“…This coincides with the setting studied in [71, 82] over the orthogonal group SO(2), under the name of synchronization [3, 9, 55]. It has been shown that the leading eigenvector of a certain data matrix becomes positively correlated with the truth as long as π₀ > 1/√n when p_obs = 1 [71].…”
Section: Extension: Large-m Case (supporting)
confidence: 79%
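The quoted claim — that the leading eigenvector correlates with the truth once the fraction of correct measurements clears roughly 1/√n — can be illustrated numerically. This is a hedged sketch under assumed names (`pi0` for the correct-measurement rate, `n` for the number of phases), not the cited construction itself: each pairwise offset is kept with probability `pi0` and replaced by a uniformly random phase otherwise, and we check the top eigenvector's alignment with the truth:

```python
# Illustrative check (not from the cited papers): leading eigenvector of a
# partially corrupted relative-phase matrix stays correlated with the truth
# when the correct-measurement rate pi0 is well above 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
n = 400
theta = rng.uniform(0, 2 * np.pi, n)
z = np.exp(1j * theta)

pi0 = 0.2                                  # well above 1/sqrt(400) = 0.05
good = rng.random((n, n)) < pi0            # which offsets are correct
random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
C = np.where(good, np.outer(z, z.conj()), random_phase)
C = np.triu(C, 1)
C = C + C.conj().T                         # Hermitian, zero diagonal

vals, vecs = np.linalg.eigh(C)
v = vecs[:, -1]
corr = np.abs(np.vdot(v, z)) / (np.linalg.norm(v) * np.linalg.norm(z))
print(round(corr, 3))
```

Here the signal eigenvalue scales like `pi0 * n` while the corruption contributes a spectral norm of order `2 * sqrt(n)`, so a visible correlation emerges exactly in the regime the quote describes.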
“…• Phase synchronization [92, 93]. Suppose we wish to recover n unknown phases φ_1, …, φ_n ∈ [0, 2π] given their pairwise relative phases.…”
Section: Projected Power Methods for Constrained PCA (mentioning)
confidence: 99%
“…It follows from (29) that φ is either a zero vector (i.e., φ = 0) or a left singular vector of Y (i.e., φ = αp_j for some j ∈ [r]). Plugging φ = αp_j into (29) gives…”
Section: A Proof of Lemma (mentioning)
confidence: 99%
“…Outside of the context of neural networks, such geometric analysis (characterizing the behavior of all critical points) has been recognized as a powerful tool for understanding nonconvex optimization problems in applications such as phase retrieval [35, 40], dictionary learning [41], tensor factorization [12], phase synchronization [29], and low-rank matrix optimization [3, 13, 14, 25, 26, 34, 45, 46]. A regularizer similar to the one used in (2) (see (6)) is also utilized in [13, 25, 34, 45, 46] for analyzing the optimization geometry.…”
Section: Introduction (mentioning)
confidence: 99%