2020
DOI: 10.48550/arxiv.2012.13349
Preprint

Solving Mixed Integer Programs Using Neural Networks

Abstract: Mixed Integer Programming (MIP) solvers rely on an array of sophisticated heuristics developed with decades of research to solve large-scale MIP instances encountered in practice. Machine learning offers to automatically construct better heuristics from data by exploiting shared structure among instances in the data. This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
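Concretely, this line of work encodes a MIP as a bipartite graph over variable and constraint nodes and feeds it to a graph neural network. The sketch below is a minimal illustration of that encoding for a MIP of the form min c^T x s.t. A x <= b; the feature choices are assumptions for illustration, not the paper's exact ones.

```python
import numpy as np

def mip_to_bipartite(c, A, b, is_integer):
    """Encode a MIP  min c^T x  s.t.  A x <= b  as a bipartite graph.

    Variable nodes carry (objective coefficient, integrality flag);
    constraint nodes carry the right-hand side; an edge connects variable j
    to constraint i wherever A[i, j] != 0, with the coefficient as the edge
    feature. These features are illustrative, not the paper's exact set.
    """
    var_feats = np.stack([c, is_integer.astype(float)], axis=1)  # (n_vars, 2)
    con_feats = b.reshape(-1, 1)                                 # (n_cons, 1)
    rows, cols = np.nonzero(A)
    edge_index = np.stack([rows, cols])                          # constraint -> variable
    edge_feats = A[rows, cols].reshape(-1, 1)
    return var_feats, con_feats, edge_index, edge_feats

# Tiny example: min -x0 - 2*x1  s.t.  x0 + x1 <= 3,  x1 <= 2,  x integer.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([3.0, 2.0])
v, u, e_idx, e_ft = mip_to_bipartite(c, A, b, np.array([True, True]))
print(v.shape, u.shape, e_idx.shape)  # (2, 2) (2, 1) (2, 3)
```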

Cited by 62 publications (125 citation statements)
References 37 publications
“…The authors also explore non-gradient-based approaches and initialized weights to accelerate the convergence of gradient-based algorithms. Finally, there has been research directed towards using ANNs to solve MIPs [18]. In contrast to the previous literature, we propose methods that directly use MIP techniques for training ANNs, as opposed to using them only in their evaluation.…”
Section: Literature Review (mentioning)
Confidence: 99%
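For context on "MIP techniques for ANNs": a trained ReLU unit y = max(0, w.x + b) can be encoded exactly inside a MIP with one binary variable and big-M constraints. The sketch below does this with PuLP; the weights, input box, and M are assumptions, and this shows the standard big-M encoding rather than the cited papers' specific formulations.

```python
import pulp

# Encode one trained ReLU unit, y = max(0, w.x + b), via the standard
# big-M trick. M must upper-bound |w.x + b| over the input box; the
# weights and M here are assumptions for illustration.
w, b, M = [1.0, -2.0], 0.5, 10.0

prob = pulp.LpProblem("relu_unit", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(2)]
y = pulp.LpVariable("y", lowBound=0)
z = pulp.LpVariable("z", cat="Binary")        # z = 1  <=>  unit is active

prob += 1.0 * y                               # objective: maximize the output
pre = pulp.lpSum(wi * xi for wi, xi in zip(w, x)) + b
prob += y >= pre                              # y at least the pre-activation
prob += y <= pre + M * (1 - z)                # tight when z = 1
prob += y <= M * z                            # forces y = 0 when z = 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y))                          # 3.5: max of w.x + b over the box
```

The same building block underlies MIP-based evaluation and verification of trained networks; the training-side formulations in the citing work build on similar encodings.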
“…The proposed hybrid architecture improves the weak model by extracting high-level structural information at the initial point with the GNN model, and preserves the original GNN model's ability to generalize to problems harder than those it was trained on. Nair et al. [79] adopted imitation learning to obtain a MILP brancher, expanding the GNN approach with large-scale parallel GPU computation. By mimicking an ADMM-based expert and combining the branching rule with a primal heuristic, their work outperforms SCIP [80] in solving time on five real-life benchmarks.…”
Section: Supervised Learning in Branching (mentioning)
Confidence: 99%
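The imitation-learned brancher in [79] reduces to supervised classification: a policy scores the branching candidates at a node and is trained with cross-entropy against the expert's chosen candidate. A minimal sketch, with an MLP scorer standing in for the GNN and synthetic features and labels:

```python
import torch
import torch.nn as nn

# Policy scores each branching candidate from its feature vector; training
# minimizes cross-entropy against the expert's chosen candidate.
class Scorer(nn.Module):
    def __init__(self, n_feats, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_feats, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, cand_feats):                # (n_cands, n_feats)
        return self.net(cand_feats).squeeze(-1)   # one logit per candidate

policy = Scorer(n_feats=16)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Dummy training step: one node with 10 candidates; expert picked number 3.
cand_feats = torch.randn(10, 16)
expert_choice = torch.tensor(3)

logits = policy(cand_feats)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), expert_choice.unsqueeze(0))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```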
“…A relatively large body of literature focuses on embedding learning methods into primal heuristics, including but not limited to [79, 99–105]. Khalil et al. [99] used binary classification to predict whether a primal heuristic would succeed at a given node.…”
Section: Learning in Heuristic Selection (mentioning)
Confidence: 99%
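The idea in [99] is a run/skip decision: a classifier trained offline on node features, labeled by whether the heuristic found an incumbent, gates the (expensive) heuristic at each node. A minimal sketch with hypothetical features and a logistic-regression stand-in for the classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical node features (e.g. depth, LP gap, fractionality) and labels
# recording whether the primal heuristic succeeded when run at that node.
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

def should_run_heuristic(node_feats, threshold=0.5):
    """Gate the expensive heuristic on the predicted success probability."""
    return clf.predict_proba(node_feats.reshape(1, -1))[0, 1] >= threshold

print(should_run_heuristic(np.array([0.0, 1.2, -0.3])))  # likely True
```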
“…This GCNN-based approach was the first to report results better than a solver (SCIP) that uses presolving, primal heuristics, and cuts. Later, [23] built on the GCNN method by incorporating an ADMM-based expert to scale up full strong branching to large instances. Other works include [26], where the authors show that using the entire B&B tree can further boost imitation learning, and [14], where imitation learning was made faster by switching to a small MLP after the root node of the tree.…”
Section: Related Work (mentioning)
Confidence: 99%
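Full strong branching, the expert these works imitate, scores each fractional LP variable by tentatively solving the LP relaxations of both child nodes and combining the two objective degradations (commonly their product). A minimal sketch for a min-form relaxation using scipy; the example data and the eps floor in the product rule are assumptions, and real implementations add warm starts and bound tightening:

```python
import math
import numpy as np
from scipy.optimize import linprog

def lp_value(c, A_ub, b_ub, bounds):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun if res.success else math.inf

def strong_branching_scores(c, A_ub, b_ub, bounds, x_lp, eps=1e-6):
    """Score fractional variables by the product of child LP degradations."""
    parent = lp_value(c, A_ub, b_ub, bounds)
    scores = {}
    for j, xj in enumerate(x_lp):
        if abs(xj - round(xj)) < eps:
            continue  # already integral: not a branching candidate
        lo, hi = bounds[j]
        down = list(bounds); down[j] = (lo, math.floor(xj))  # branch x_j <= floor
        up = list(bounds);   up[j] = (math.ceil(xj), hi)     # branch x_j >= ceil
        d_down = lp_value(c, A_ub, b_ub, down) - parent
        d_up = lp_value(c, A_ub, b_ub, up) - parent
        scores[j] = max(d_down, eps) * max(d_up, eps)
    return scores

# Tiny example with a fractional LP optimum: min -x0 - x1  s.t.  2*x0 + 2*x1 <= 3.
c = np.array([-1.0, -1.0])
A = np.array([[2.0, 2.0]])
b = np.array([3.0])
bounds = [(0, 1), (0, 1)]
x_lp = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs").x
print(strong_branching_scores(c, A, b, bounds, x_lp))
```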