2022
DOI: 10.1007/s11063-022-10900-y

Heavy-Head Sampling for Fast Imitation Learning of Machine Learning Based Combinatorial Auction Solver

Cited by 11 publications (11 citation statements)
References 30 publications
“…The proposed weighted column sampling (WCS) optimization framework can be divided into creating, solving, and merging the solutions of multiple smaller IPs, which are referred to as the WCS models. Each WCS model i is derived from the MS model (4)-(8) by considering only a subset N_i ⊆ N of observations, and using only a subset P^e_i ⊆ P^e of feature variables b_p that are sampled according to their SIS values (17). The overall procedure of the WCS optimization is shown in Figure 1, and the major steps are described as follows.…”
Section: Weighted Column Sampling Optimization
confidence: 99%
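The create/solve/merge loop described in the excerpt can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `sis_scores` array, subset sizes, and the union-based merge step are all hypothetical placeholders standing in for the actual SIS values and IP solutions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SIS importance scores, one per candidate feature column in P^e.
n_features = 100
sis_scores = rng.random(n_features)

def sample_wcs_columns(sis, n_cols, rng):
    """Sample a feature subset P^e_i with probability proportional to SIS values."""
    probs = sis / sis.sum()
    return rng.choice(len(sis), size=n_cols, replace=False, p=probs)

# Create several smaller subproblems, each over a sampled column subset.
# In the actual framework each subset (together with a subset N_i of
# observations) defines a smaller IP that is solved independently.
subsets = [sample_wcs_columns(sis_scores, n_cols=20, rng=rng) for _ in range(5)]

# Merge step (illustrative only): take the union of columns selected
# across the WCS models.
merged = sorted(set(np.concatenate(subsets).tolist()))
print(len(subsets), len(merged))
```

The key point the excerpt makes is that columns with higher SIS values are more likely to appear in each smaller IP, so the merged solution concentrates effort on the most informative features.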
“…This representation is natural for MILPs and has shown promising performance. In [16], Peng et al. proposed that prioritizing the sampling of certain branching decisions over others, and thus providing a better branching data distribution, could further improve the performance of the trained model. In [17], the authors pointed out that the GCNN-based approach relies too heavily on high-end GPUs, which may not be accessible to many practitioners.…”
Section: The Bigraph Representation For State Embedding
confidence: 99%
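The idea of prioritizing some branching decisions over others when building the imitation dataset can be sketched as weighted sampling over recorded expert decisions. This is a hedged illustration, not the cited method: the `records` list and the depth-based weight `1 / (1 + depth)` are assumptions standing in for whatever priority the heavy-head strategy actually assigns.

```python
import random

random.seed(0)

# Hypothetical branching records: (tree_depth, state_id, expert_decision).
records = [(d % 10, f"state{d}", d % 2) for d in range(200)]

# Assumed "heavy-head" weighting: decisions near the root (head) of the
# branch-and-bound tree get larger sampling weights, skewing the training
# distribution toward early, high-impact branching decisions.
weights = [1.0 / (1 + depth) for depth, _, _ in records]

# Draw a training batch according to these weights instead of uniformly.
batch = random.choices(records, weights=weights, k=32)
print(len(batch))
```

Compared with uniform sampling over all recorded decisions, this kind of weighting changes the data distribution the imitation learner sees, which is the effect the excerpt credits for the improved performance.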
“…As in [16], the results in this paper are presented in the form of "mean r ± std%" to avoid dependence of the results on the experimental environment, where "r" is the mean Node count or solving Time used as the reference value. For example, 0.7883r ± 6.68% means that the metric is 0.7883 times the reference value, with a per-instance standard deviation of 0.0668 averaged over all instances.…”
Section: Comparison Of Problem-solving Efficiency
confidence: 99%
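The "mean r ± std%" presentation can be reproduced with a few lines of arithmetic. The computation below is an assumption for illustration (a plain arithmetic mean and population standard deviation over hypothetical per-instance values); the cited papers may use a different aggregation, such as geometric means.

```python
# Hypothetical per-instance solve times and reference value r.
times = [0.75, 0.80, 0.82]
reference = 1.0

# Normalize each instance by the reference, then report the mean ratio
# and the standard deviation of the ratios as a percentage.
ratios = [t / reference for t in times]
mean_ratio = sum(ratios) / len(ratios)
std_pct = (sum((x - mean_ratio) ** 2 for x in ratios) / len(ratios)) ** 0.5 * 100
print(f"{mean_ratio:.4f}r ± {std_pct:.2f}%")
```

Reporting ratios to a shared reference, rather than raw times, is what makes results comparable across different hardware and solver environments.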
“…The authors in [40] proposed a hybrid Ant Colony Optimization (ACO) algorithm to solve the NP-hard combinatorial auction problem, at the expense of longer execution time. In [41], a heavy-head sampling strategy was proposed with Imitation Learning (IL) to solve the WDP of combinatorial auctions. They combined IL with Reinforcement Learning (RL) to improve the evaluation process of CA; however, this approach is prone to extended training time.…”
Section: B Literature Review
confidence: 99%