2022
DOI: 10.48550/arxiv.2211.05116
Preprint

A Note on Task-Aware Loss via Reweighing Prediction Loss by Decision-Regret

Abstract: In this short technical note we propose a baseline for decision-aware learning for contextual linear optimization, which solves stochastic linear optimization when cost coefficients can be predicted based on context information. We propose a decision-aware version of predict-then-optimize. We reweigh the prediction error by the decision regret incurred by an (unweighted) pilot estimator of costs to obtain a decision-aware predictor, then optimize with cost predictions from the decision-aware predictor. This me…
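The abstract's procedure can be sketched in a few lines. This is a minimal illustration, not the note's implementation: the toy decision problem (pick the cheapest of d items), the helper names `decide` and `fit_ols`, and the epsilon floor on the regret weights are all assumptions introduced here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual linear optimization: pick the cheapest of d items,
# i.e. minimize c @ w over one-hot decisions w.
n, p, d = 200, 5, 3
X = rng.normal(size=(n, p))
Theta_true = rng.normal(size=(p, d))
C = X @ Theta_true + 0.5 * rng.normal(size=(n, d))  # observed cost vectors

def decide(c):
    """Optimal decision for cost vector c: one-hot on the cheapest item."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def fit_ols(X, C, weights=None):
    """(Weighted) least-squares fit of C on X; returns coefficient matrix."""
    if weights is None:
        weights = np.ones(len(X))
    Xw = X * weights[:, None]
    return np.linalg.solve(X.T @ Xw, X.T @ (C * weights[:, None]))

# Step 1: unweighted pilot estimator of costs.
Theta_pilot = fit_ols(X, C)
C_pilot = X @ Theta_pilot

# Step 2: decision regret of the pilot's decisions against the oracle
# decision for the realized costs (always nonnegative).
regret = np.array([
    c @ decide(c_hat) - c @ decide(c)
    for c, c_hat in zip(C, C_pilot)
])

# Step 3: refit with prediction loss reweighed by the regret
# (epsilon floor keeps zero-regret samples in play -- an assumption here).
weights = regret + 1e-3
Theta_aware = fit_ols(X, C, weights)

# Step 4: predict-then-optimize with the decision-aware predictor.
x_new = rng.normal(size=p)
w_new = decide(x_new @ Theta_aware)
print("chosen item:", int(np.argmax(w_new)))
```

The reweighting concentrates the regression fit on contexts where the pilot's cost errors actually changed the downstream decision, which is the sense in which the resulting predictor is "decision-aware".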


Cited by 1 publication (4 citation statements)
References 4 publications
“…Multiple authors have suggested learning task-specific loss functions for DFL (Chung et al 2022;Lawless and Zhou 2022;Shah et al 2022). These approaches add learnable parameters to standard loss functions (e.g., MSE) and tune them, such that the resulting loss functions approximate the 'regret' in DQ for 'typical' predictions.…”
Section: Task-specific Loss Functions
confidence: 99%
“…4. Learning predictive model M_θ: Train the predictive model M_θ on the loss functions learned in the previous step, e.g., a random forest (Chung et al 2022), a neural network (Shah et al 2022), or a linear model (Lawless and Zhou 2022). In this paper, we propose two modifications to the meta-algorithm above.…”
Section: Task-specific Loss Functions
confidence: 99%