2015 IEEE International Conference on Computer Vision (ICCV) 2015
DOI: 10.1109/iccv.2015.211

Inferring M-Best Diverse Labelings in a Single One

Cited by 26 publications (24 citation statements) | References 18 publications

“…As a result, the λ_m multipliers cannot provide a good trade-off between energy and diversity. Secondly, the incremental (greedy) update of x^m does not guarantee that all the m modes are placed in local maxima [11]. Recent joint m-best methods [12], which do not enforce the MAP solution as part of the set of solutions, outperform sequential ones both in terms of quality and runtime, but the difference in runtime grows with m. Thus, a trade-off between sequential and joint m-best can be implemented by expanding a tree-like structure, such as a binary tree partitioning (BTP) [24,23], but driven by heuristic variable selection.…”
Section: Sequential vs Joint M-Best (mentioning)
confidence: 99%
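
The passage above contrasts the sequential (greedy) and joint formulations of diverse M-best inference. As a rough sketch only, and up to the exact diversity measure Δ and weights used in the cited works, the two objectives can be written as follows (E is the energy of the underlying model; λ_m' and λ are diversity weights):

    % Sequential (greedy) diverse M-best: each new labeling x^m is a MAP of the
    % original energy, penalized for similarity to the labelings found so far.
    x^{m} = \operatorname*{arg\,min}_{x}\; E(x) - \sum_{m' < m} \lambda_{m'}\, \Delta\bigl(x, x^{m'}\bigr)

    % Joint diverse M-best: all M labelings are inferred together, trading off the
    % total energy against pairwise diversity with a single weight \lambda.
    (x^{1},\dots,x^{M}) = \operatorname*{arg\,min}_{x^{1},\dots,x^{M}}\; \sum_{m=1}^{M} E(x^{m}) - \lambda \sum_{m < m'} \Delta\bigl(x^{m}, x^{m'}\bigr)

The greedy scheme fixes each x^{m'} before x^{m} is computed, which is what the quoted criticism about local maxima refers to; the joint scheme, which is the formulation of the paper indexed here, optimizes all M labelings at once.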
“…We use the minimum structural Hamming distance (SHD) between a set of candidate Bayesian networks and the ground truth network to measure the collective capability of the candidate set in discovering the true underlying structure. This is called oracle accuracy, i.e., the best one among the top results, which is commonly used in the literature on multiple diverse predictions (Batra et al. 2012; Kirillov et al. 2015; Chen et al. 2013).…”
Section: Accuracy in Structure Learning (mentioning)
confidence: 99%
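
The quoted passage defines oracle accuracy of a diverse top-M set as the smallest SHD that any single candidate in the set achieves against the ground-truth network. A minimal Python sketch, assuming networks are represented as sets of directed edges; the particular SHD variant (extra, missing, and reversed edges each counted once) and all function names are illustrative assumptions, not taken from the cited papers.

    def shd(candidate_edges, true_edges):
        """Structural Hamming distance between two DAGs given as sets of directed
        edges (u, v): counts extra, missing, and reversed edges, with a reversed
        edge counted once rather than as one extra plus one missing edge."""
        cand, true = set(candidate_edges), set(true_edges)
        reversed_edges = {(u, v) for (u, v) in cand - true if (v, u) in true}
        extra = (cand - true) - reversed_edges
        missing = {(u, v) for (u, v) in true - cand if (v, u) not in cand}
        return len(extra) + len(missing) + len(reversed_edges)

    def oracle_shd(candidates, true_edges):
        """Oracle accuracy of a top-M set: the best (minimum) SHD achieved by
        any single candidate network in the set."""
        return min(shd(c, true_edges) for c in candidates)

    # Toy usage: ground truth A->B->C and two candidate structures.
    truth = {("A", "B"), ("B", "C")}
    top_m = [
        {("A", "B"), ("C", "B")},               # one reversed edge -> SHD 1
        {("A", "B"), ("B", "C"), ("A", "C")},   # one extra edge    -> SHD 1
    ]
    print(oracle_shd(top_m, truth))             # prints 1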
“…Such a diverse set of mode models is expected to provide a much better coverage of the true underlying model. This work is inspired by recent success in developing methods for finding multiple diverse predictions, including Diverse M-Best (Batra et al. 2012), joint Diverse M-Best (Kirillov et al. 2015), and M-Modes (Chen et al. 2013; 2018). Based on a global-local theorem showing that a mode Bayesian network must be optimal in all local scopes, we introduce an A* search algorithm to efficiently find the top M Bayesian networks which are highly probable and naturally diverse.…”
Section: Introduction (mentioning)
confidence: 99%
“…The proposed solution is to sequentially find several local optima and force them to be different from each other by introducing diversity constraints in the objective function. It has recently been shown that it is provably more effective to solve for diverse MAPs jointly but under the same set of constraints [20]. However, none of these methods provide a generic and practical way to choose local constraints to be enforced over variable sub-groups.…”
Section: Mean Field Inference (mentioning)
confidence: 99%
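
The quoted passage describes the sequential scheme: find one local optimum, then re-solve with a penalty that pushes later solutions away from the ones already found, and notes that solving for the diverse MAPs jointly is provably more effective. Below is a minimal sketch of the sequential idea on a toy model with unary potentials only, so that exact MAP under a node-wise Hamming diversity penalty decomposes per variable; the lam weight and the Hamming penalty are illustrative choices, not the local constraints discussed in the cited works.

    import numpy as np

    def diverse_maps_sequential(unaries, m, lam):
        """Greedy diverse M-best MAP for a model with unary potentials only,
        where unaries[i, l] is the energy of assigning label l to variable i.
        Each new labeling minimizes the energy plus lam for every variable that
        re-uses the label of a previously found labeling (a node-wise Hamming
        penalty), which decomposes per variable and is solved exactly here."""
        n_vars, _ = unaries.shape
        solutions = []
        for _ in range(m):
            penalized = unaries.copy()
            for prev in solutions:
                # Penalize agreeing with an earlier solution at each variable.
                penalized[np.arange(n_vars), prev] += lam
            solutions.append(np.argmin(penalized, axis=1))
        return solutions

    # Toy usage: 3 variables with 3 labels each; a larger lam pushes the later
    # labelings further away from the MAP, trading energy for diversity.
    rng = np.random.default_rng(0)
    unaries = rng.random((3, 3))
    for x in diverse_maps_sequential(unaries, m=3, lam=0.5):
        print(x)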