2017
DOI: 10.48550/arxiv.1701.06501
Preprint

Maximum likelihood estimation of determinantal point processes

Abstract: Determinantal point processes (DPPs) have wide-ranging applications in machine learning, where they are used to enforce the notion of diversity in subset selection problems. Many estimators have been proposed, but surprisingly the basic properties of the maximum likelihood estimator (MLE) have received little attention. The difficulty is that it is a non-concave maximization problem, and such functions are notoriously difficult to understand in high dimensions, despite their importance in modern machine learning. …
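The likelihood the abstract refers to is that of a finite L-ensemble DPP, where an observed subset A has probability P(A) = det(L_A) / det(L + I). A minimal sketch of that log-likelihood is below; the function name and the toy diagonal kernel are illustrative, not taken from the paper, and this computes the objective only, not the paper's analysis of its maximizers.

```python
import numpy as np

def dpp_log_likelihood(L, samples):
    """Average log-likelihood of observed subsets under a finite
    L-ensemble DPP: log P(A) = log det(L_A) - log det(L + I)."""
    n = L.shape[0]
    # Normalization constant det(L + I), shared by every subset.
    _, log_norm = np.linalg.slogdet(L + np.eye(n))
    total = 0.0
    for A in samples:
        idx = np.asarray(A)
        sub = L[np.ix_(idx, idx)]  # principal submatrix L_A
        _, log_det = np.linalg.slogdet(sub)
        total += log_det - log_norm
    return total / len(samples)

# Toy example: 3-item ground set with a diagonal (independence) kernel.
L = np.diag([1.0, 2.0, 0.5])
print(dpp_log_likelihood(L, [[0], [1], [0, 1]]))
```

Maximizing this objective over the kernel L is the non-concave problem the paper studies: the map L ↦ log det(L_A) − log det(L + I) is a difference of concave terms, so standard convex guarantees do not apply.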

Cited by 4 publications (4 citation statements); references 14 publications.
“…First, our final objective function is nonconvex, and our algorithm is only guaranteed to increase its objective function. Experimental evidence suggests that our approach recovers the synthetic kernel, but more work is needed to study the maximizers of the likelihood, in the spirit of Brunel et al [2017a] for finite DPPs, and the properties of our fixed point algorithm. Second, the estimated integral kernel does not have any explicit structure, other than being implicitly forced to be low-rank because of the trace penalty.…”
Section: Discussion
confidence: 99%
“…Likelihood estimation in this setting, based on the observation of n i.i.d. discrete DPPs, has been studied in Brunel et al (2017), who investigate asymptotic properties when n tends to infinity. In the continuous case, likelihood estimation based on n i.i.d.…”
Section: Introduction
confidence: 99%
“…Likelihood estimation in this setting, based on the observation of n i.i.d. discrete DPPs, has been studied in [10], who investigate asymptotic properties when n tends to infinity. In the continuous case, likelihood estimation based on n i.i.d.…”
Section: Introduction
confidence: 99%