Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/390

Contrastive Losses and Solution Caching for Predict-and-Optimize

Abstract: Many decision-making processes involve solving a combinatorial optimization problem with uncertain input that can be estimated from historic data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optima…
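The abstract describes surrogate losses that treat non-optimal solutions as negative (contrastive) samples, combined with a cache of previously seen solutions so the combinatorial problem need not be re-solved at every training step. The following is a minimal margin-style sketch of that idea, assuming a linear minimization objective y^T v; the function name, the margin form, and all details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrastive_surrogate_loss(y_hat, v_true, cache):
    """Illustrative contrastive surrogate loss (not the paper's exact loss).

    y_hat  : predicted cost vector from the ML model
    v_true : the true optimal decision vector for the instance
    cache  : list of cached feasible solutions, used as negative samples
             instead of calling the combinatorial solver each epoch
    """
    # For a minimization problem, the true optimum should cost no more than
    # any cached non-optimal solution under the predicted costs; penalize
    # each violation of that ordering.
    loss = 0.0
    for v_neg in cache:
        loss += max(0.0, float(y_hat @ v_true) - float(y_hat @ v_neg))
    return loss / max(len(cache), 1)
```

For example, if the predicted costs rank the true optimum cheapest, every term is zero and the loss vanishes; a prediction that makes a cached non-optimal solution look cheaper incurs a positive penalty proportional to the gap.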

Cited by 11 publications (3 citation statements). References 0 publications.
“…While contrastive learning of visual representations (Hjelm et al., 2019; He et al., 2020; Chen et al., 2020) and graph representations (You et al., 2020; Tong et al., 2021) has been studied extensively, it has not been explored much for COPs. Mulamba et al. (2021) derive a contrastive loss for decision-focused learning to solve COPs with uncertain inputs that can be learned from historical data, where they view non-optimal solutions as negative samples. Duan et al. (2022) use contrastive pre-training to learn good representations for the Boolean satisfiability problem.…”
Section: Contrastive Learning for COPs
confidence: 99%
“…Blackbox differentiation: For this general formulation of LSAP, a wide range of predict-and-optimize schemes exists that provide end-to-end optimization for combinatorial optimization problems, based on, e.g., the interpolation of optimization mappings (Vlastelica et al., 2020; Sahoo et al., 2023), continuous relaxations (Amos & Kolter, 2017; Elmachtoub & Grigas, 2017; Wilder et al., 2019), or methods that bypass gradient computation for the optimizer entirely using surrogate losses (Mulamba et al., 2021; Shah et al., 2022). For an in-depth review and comparison of existing approaches, the reader is referred to Geng et al. (2023).…”
Section: Track Construction as a Differentiable Assignment Problem
confidence: 99%
“…There is also some work that discusses alternatives to DFL that don't require coming up with custom relaxations of the optimization problems of interest. Mulamba et al. [11] provide a contrastive learning-based proxy objective that doesn't require differentiating through (or even solving) the optimization problem z*(ŷ). However, this approach makes a strong assumption about the nature of the decision loss DL, whereas our approach can learn surrogates that are tailored to any optimization problem.…”
Section: Related Work
confidence: 99%