2020
DOI: 10.48550/arxiv.2003.07706
Preprint
Linear Regression without Correspondences via Concave Minimization

Liangzu Peng,
Manolis C. Tsakiris

Abstract: Linear regression without correspondences concerns the recovery of a signal in the linear regression setting, where the correspondences between the observations and the linear functionals are unknown. The associated maximum-likelihood function is NP-hard to compute when the signal has dimension larger than one. To optimize this objective function we reformulate it as a concave minimization problem, which we solve via branch-and-bound. This is supported by a computable search space to branch, an effective lower…
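To make the objective concrete, here is a minimal brute-force sketch of the maximum-likelihood problem on a toy noiseless instance (this is an illustration only, not the paper's branch-and-bound algorithm; all variable names are made up). For each candidate permutation the inner problem is ordinary least squares, and the factorial number of permutations is what makes the objective intractable to evaluate directly.

```python
import itertools
import numpy as np

# Toy instance of "linear regression without correspondences":
# y = Pi (A x) for an unknown permutation Pi (noiseless, for illustration).
rng = np.random.default_rng(0)
n, d = 5, 2                       # tiny n: the search space has n! permutations
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
perm_true = rng.permutation(n)
y = (A @ x_true)[perm_true]       # observations with shuffled correspondences

# Maximum-likelihood estimate by exhaustive search: for each candidate
# permutation, solve the inner least-squares problem and keep the best fit.
best_err, best_x = np.inf, None
for perm in itertools.permutations(range(n)):
    x_hat, *_ = np.linalg.lstsq(A[list(perm)], y, rcond=None)
    err = np.linalg.norm(A[list(perm)] @ x_hat - y)
    if err < best_err:
        best_err, best_x = err, x_hat

# In the noiseless case the exhaustive search recovers the true signal.
```

With n = 5 this already requires 120 least-squares solves; the paper's contribution is a branch-and-bound scheme that avoids this exhaustive enumeration.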


Cited by 2 publications (2 citation statements)
References 26 publications
“…Tsakiris and collaborators [20,21] have studied important theoretical aspects such as well-posedness from an algebraic perspective, and have also put forth practical computational schemes such as a branch-and-bound algorithm (cf. also [30]) and concave maximization [31]. An approximate EM scheme with a Markov-Chain-Monte-Carlo (MCMC) approximation of the E-step is discussed in [17,32].…”
Section: Introduction (mentioning)
Confidence: 99%
“…• Some of the existing algorithms have very high computational complexity, and can only handle a small number of data points in low dimensions (Elhami et al, 2017; Pananjady et al, 2017a; Tsakiris et al, 2018; Peng and Tsakiris, 2020). Other algorithms choose to optimize with respect to w and π in an alternating manner, e.g., alternating minimization in Abid et al (2017).…”
Section: Introduction (mentioning)
Confidence: 99%
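The alternating scheme mentioned in the statement above can be sketched as follows (a hypothetical illustration, not the algorithm of Abid et al.; the function and variable names are invented). For scalar responses, the optimal matching of observations to predictions under squared loss pairs them in sorted order, by the rearrangement inequality, so the permutation step reduces to two sorts.

```python
import numpy as np

def alternating_min(A, y, n_iters=50, seed=0):
    """Alternate between (i) matching observations to predictions and
    (ii) refitting w by least squares. A heuristic: unlike branch-and-bound,
    it carries no global-optimality guarantee and can stall in local minima."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[1])
    for _ in range(n_iters):
        order_pred = np.argsort(A @ w)   # rows of A, by predicted response
        order_y = np.argsort(y)          # observations, by value
        # pair the k-th smallest prediction with the k-th smallest y,
        # then refit w on the matched pairs
        w, *_ = np.linalg.lstsq(A[order_pred], y[order_y], rcond=None)
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
w_true = rng.standard_normal(3)
y = rng.permutation(A @ w_true)          # shuffled, noiseless observations
w_est = alternating_min(A, y)
```

Each iteration is monotone (neither the matching step nor the least-squares step can increase the objective), which is why such schemes are cheap per iteration but sensitive to initialization.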