Proceedings of the Sixteenth ACM Conference on Economics and Computation 2015
DOI: 10.1145/2764468.2764479
Ignorance is Almost Bliss: Near-Optimal Stochastic Matching With Few Queries

Abstract: The stochastic matching problem deals with finding a maximum matching in a graph whose edges are unknown but can be accessed via queries. This is a special case of stochastic k-set packing, where the problem is to find a maximum packing of sets, each of which exists with some probability. In this paper, we provide edge and set query algorithms for these two problems, respectively, that provably achieve some fraction of the omniscient optimal solution. Our main theoretical result for the stochastic matching (i.e…
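To make the query-based setting concrete, below is a minimal Python sketch of one natural adaptive edge-query strategy: in each round, compute a maximum matching over the not-yet-queried candidate edges among still-unmatched vertices, then query every edge of that matching. This is an illustrative sketch, not the paper's algorithm; the uniform edge-existence probability p, the round budget, the simulated query outcomes, and the use of networkx are all assumptions.

```python
import random
import networkx as nx

def adaptive_stochastic_matching(vertices, possible_edges, p, rounds=3):
    """Multi-round query sketch (hypothetical, not the paper's method):
    each round queries the edges of a maximum matching among vertices
    not yet covered by a confirmed edge."""
    matched = set()    # vertices covered by a confirmed (existing) edge
    queried = set()    # edges already queried, stored in both orientations
    matching = []      # confirmed edges found so far
    for _ in range(rounds):
        # Graph of unqueried candidate edges between unmatched vertices.
        G = nx.Graph()
        G.add_nodes_from(v for v in vertices if v not in matched)
        G.add_edges_from(
            (u, v) for (u, v) in possible_edges
            if (u, v) not in queried and u not in matched and v not in matched
        )
        # Query every edge of a maximum-cardinality matching of G.
        for u, v in nx.max_weight_matching(G, maxcardinality=True):
            queried.update({(u, v), (v, u)})
            if random.random() < p:   # simulated query: edge exists w.p. p
                matching.append((u, v))
                matched.update({u, v})
    return matching

# Example usage on a 6-cycle with edge probability 0.5:
# adaptive_stochastic_matching(range(6),
#     [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)], p=0.5)
```

Querying the edges of a maximum matching each round is the natural greedy choice here, since every confirmed edge is immediately usable in the output matching; the paper's guarantees concern how few such rounds suffice to approach the omniscient optimum.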

Cited by 11 publications (1 citation statement). References 35 publications.
“…As such, several more low-complexity and practical strategies have been proposed to incorporate uncertainty estimation into deep neural networks, such as Monte-Carlo dropout and deep ensembles. MC-dropout-based methods [23][24][25] introduce randomness on the intermediate neurons of the network. Such methods perform multiple forward propagations of the model during testing and randomly discard some neurons in each forward propagation to obtain different predictions, which are then aggregated to obtain an uncertainty estimate for a given input.…”
Section: Uncertainty Estimation in Deep Learning
confidence: 99%
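The MC-dropout procedure the citing passage describes can be sketched in a few lines of PyTorch: dropout layers are kept stochastic at test time, several forward passes are run, and the predictions are aggregated, with the spread across passes serving as the uncertainty estimate. The model, input, and number of passes below are assumptions for illustration.

```python
import torch

def mc_dropout_predict(model, x, n_passes=20):
    model.eval()  # fix deterministic layers such as batch norm ...
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()  # ... but keep (plain) dropout layers stochastic
    with torch.no_grad():
        # n_passes stochastic forward passes, stacked along a new dim.
        preds = torch.stack([model(x) for _ in range(n_passes)])
    # Aggregated prediction and per-output spread as uncertainty.
    return preds.mean(dim=0), preds.std(dim=0)
```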