2012
DOI: 10.1007/978-3-642-33715-4_1
Diverse M-Best Solutions in Markov Random Fields

Abstract: Much effort has been directed at algorithms for obtaining the highest-probability (MAP) configuration in probabilistic (random field) models. In many situations, one could benefit from additional high-probability solutions. Current methods for computing the M most probable configurations produce solutions that tend to be very similar to the MAP solution and to each other. This is often an undesirable property. In this paper we propose an algorithm for the Diverse M-Best problem, which involves finding a …
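The greedy idea behind Diverse M-Best can be sketched on a chain MRF: with a Hamming dissimilarity, the diversity penalty decomposes over individual nodes, so each successive solution is just a standard MAP (Viterbi) call with modified unary scores. This is a minimal illustration under those assumptions, not the paper's implementation; the function names and the single pairwise matrix shared across edges are choices made here for brevity.

```python
import numpy as np

def chain_map(unary, pairwise):
    """Viterbi-style MAP inference on a chain MRF.

    unary: (T, K) node scores; pairwise: (K, K) edge scores
    shared across all edges. Returns the highest-scoring labeling.
    """
    T, K = unary.shape
    score = unary[0].astype(float).copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise      # (K_prev, K_cur)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    labels = np.zeros(T, dtype=int)
    labels[-1] = int(score.argmax())
    for t in range(T - 1, 0, -1):             # backtrack
        labels[t - 1] = back[t, labels[t]]
    return labels

def div_m_best(unary, pairwise, M, lam=1.0):
    """Greedy diverse M-best sketch: after each MAP call, subtract
    lam from the unary score of every label the new solution used,
    discouraging later solutions from repeating it."""
    unary = unary.astype(float).copy()
    solutions = []
    for _ in range(M):
        labels = chain_map(unary, pairwise)
        solutions.append(labels)
        unary[np.arange(len(labels)), labels] -= lam
    return solutions
```

With a large enough penalty `lam`, the second solution is forced off the MAP labeling entirely; with `lam = 0` the routine degenerates to returning the MAP solution M times.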


Cited by 110 publications (160 citation statements)
References 31 publications
“…Another potential of our algorithm is that since we decompose the process into subproblems, for each stage we can use any proper model for specific tasks. For example, we can use other methods to produce more diverse multiple hypotheses, such as [2,15]. For the categorization, we only use the best positive samples for training, but during the inference, the segmentation results from test images are usually not as good as training ones.…”
Section: Discussion
confidence: 99%
“…Hence, diverse solutions are preferred over a single solution. Inspired by [11], we obtain M-best solutions instead of one MAP solution. This is done individually for each candidate word selected in the previous stage.…”
Section: Diversity Preserving Inference
confidence: 99%
“…We begin by generating a set of candidate words with M-best diverse solutions [11]. With these potential solutions, we refine the large lexicon by removing words from it with a large edit distance to any of the candidates, and then recompute the M-best diverse solutions.…”
Section: Introduction
confidence: 99%
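The lexicon-refinement step quoted above (pruning words far in edit distance from every candidate) can be sketched as follows. This is an illustrative reconstruction only; `edit_distance` and `refine_lexicon` are hypothetical names, not functions from the cited paper.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic program,
    keeping only one row of the DP table at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def refine_lexicon(lexicon, candidates, max_dist=2):
    """Keep only lexicon words within max_dist edits of at
    least one candidate word."""
    return [w for w in lexicon
            if any(edit_distance(w, c) <= max_dist for c in candidates)]
```

After pruning, the diverse M-best solutions would be recomputed against the smaller lexicon, as the quoted passage describes.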
“…This is a particular instance of the general problem proposed in [4]. As in our case ∆G depends only on the part location, its score can be added directly to the data term of Eq.…”
Section: Multiple Hypotheses
confidence: 99%