2020
DOI: 10.48550/arxiv.2003.11657
Preprint

Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

Abstract: Building on recent progress at the intersection of combinatorial optimization and deep learning, we propose an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers. Using the presence of heavily optimized combinatorial solvers together with some improvements in architecture design, we advance state-of-the-art on deep graph matching benchmarks for keypoint correspondence. In addition, we highlight the conceptual advantages of incorporating solvers into deep learning…
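The "blackbox differentiation" in the title refers to the λ-interpolation scheme of Vlastelica et al. (ICLR 2020), on which this paper builds: the combinatorial solver is called unmodified in the forward pass, and the backward pass re-solves a perturbed instance to obtain an informative gradient surrogate. The PyTorch sketch below is our illustration of that scheme, not code from the paper; `solver` and `lam` are placeholder names.

```python
import torch

class BlackboxSolver(torch.autograd.Function):
    """Minimal sketch of lambda-interpolated blackbox differentiation."""

    @staticmethod
    def forward(ctx, weights, solver, lam):
        # Forward pass: one call to the unmodified combinatorial solver.
        solution = solver(weights.detach())
        ctx.save_for_backward(weights, solution)
        ctx.solver, ctx.lam = solver, lam
        return solution

    @staticmethod
    def backward(ctx, grad_output):
        weights, solution = ctx.saved_tensors
        # Perturb the costs in the direction of the incoming gradient,
        # re-solve, and use the finite difference as a gradient surrogate.
        perturbed_solution = ctx.solver(weights + ctx.lam * grad_output)
        grad = -(solution - perturbed_solution) / ctx.lam
        return grad, None, None

# Usage: matching = BlackboxSolver.apply(costs, my_matching_solver, 10.0)
```

Because the forward pass is an exact solver call, any off-the-shelf matching or assignment solver can be dropped in unchanged; λ trades off the informativeness and locality of the surrogate gradient.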

Cited by 3 publications (11 citation statements)
References 43 publications (83 reference statements)
“…In a broader context, MAML [28,29] also has a neural module for joint initialization and a reasoning module that performs optimization steps for task-specific adaptation. Other examples include [6,30,31,32,33,34,35,36,37,38,39]. More specifically, perception and reasoning can be jointly formulated in the form…”
Section: Summary of Results (mentioning)
confidence: 99%
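For context, the MAML structure referenced in this quote pairs a shared initialization (the "neural module") with a few task-specific gradient steps (the "reasoning module"). The sketch below is a hypothetical minimal rendering of that loop following the general MAML recipe, not any cited implementation; `model`, `tasks`, and `loss_fn` are assumptions.

```python
import torch

def maml_step(model, tasks, loss_fn, inner_lr=0.01, inner_steps=1):
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: adapt a functional copy of the parameters to this task.
        fast_weights = {n: p for n, p in model.named_parameters()}
        for _ in range(inner_steps):
            preds = torch.func.functional_call(model, fast_weights, (support_x,))
            grads = torch.autograd.grad(loss_fn(preds, support_y),
                                        list(fast_weights.values()),
                                        create_graph=True)
            fast_weights = {n: w - inner_lr * g
                            for (n, w), g in zip(fast_weights.items(), grads)}
        # Outer loop: score the adapted parameters on the query set.
        query_preds = torch.func.functional_call(model, fast_weights, (query_x,))
        meta_loss = meta_loss + loss_fn(query_preds, query_y)
    # Backpropagating meta_loss (through the inner steps) updates the
    # shared initialization.
    return meta_loss
```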
“…Within discrete optimization methods, Evolution Strategies (ES) is a popular choice for global optimization (33)(34)(35) and has been used to map chemical space (36). ES involves a structured search that incorporates heuristics and procedures inspired by natural evolution (37).…”
Section: Inverse Design (mentioning)
confidence: 99%
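As a concrete illustration of the Evolution Strategies family invoked above, here is a minimal elite-averaging ES loop; the population size, elite fraction, and noise scale are arbitrary illustrative choices, not values from the cited works.

```python
import numpy as np

def evolve(objective, dim, pop_size=50, elite=10, sigma=0.1, generations=100):
    mean = np.zeros(dim)
    for _ in range(generations):
        # Sample a population around the current mean.
        population = mean + sigma * np.random.randn(pop_size, dim)
        scores = np.array([objective(x) for x in population])
        # Select the best candidates (assume minimization) and recombine
        # by averaging the elite set.
        elite_idx = np.argsort(scores)[:elite]
        mean = population[elite_idx].mean(axis=0)
    return mean

# Example: minimize a shifted quadratic.
best = evolve(lambda x: np.sum((x - 3.0) ** 2), dim=5)
```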
“…Recently, [2,6] showed how to efficiently differentiate through convex cone programs by applying the implicit function theorem to a residual map introduced in [27], and [1] showed how to differentiate through convex optimization problems by an automatable reduction to convex cone programs; our method for learning convex optimization models builds on this recent work. Optimization layers have been used in many applications, including control [7,11,15,3], game-playing [46,45], computer graphics [37], combinatorial tasks [58,52,53,21], automatic repair of optimization problems [14], and data fitting more generally [9,17,16,10]. Differentiable optimization for nonconvex problems is often performed numerically by differentiating each individual step of a numerical solver [33,48,32,36], although sometimes it is done implicitly; see, e.g., [7,47,4].…”
Section: Related Work (mentioning)
confidence: 99%
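The differentiable convex optimization layers of [1] are packaged in the cvxpylayers library; the snippet below follows its documented PyTorch interface on a small constrained least-norm problem, showing how gradients flow from the solution back to the problem parameters A and b.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(0.5 * cp.pnorm(A @ x - b, p=1)),
                     [x >= 0])

# Wrap the (DPP-compliant) problem as a differentiable PyTorch layer.
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_t = torch.randn(m, n, requires_grad=True)
b_t = torch.randn(m, requires_grad=True)
solution, = layer(A_t, b_t)          # forward: solve the cone program
solution.sum().backward()            # backward: implicit differentiation
```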
“…tion techniques [25,26,28,34] have been proposed to make it computationally tractable. More recently, deep graph matching (DGM) methods have given rise to many more flexible formulations [13,32,39,45] beyond the traditional QAP. DGM aims to learn meaningful node affinities from deep features extracted by a convolutional neural network.…”
Section: Introduction (mentioning)
confidence: 99%
“…DGM aims to learn meaningful node affinities from deep features extracted by a convolutional neural network. To this end, many existing DGM methods [32,39,45] focus primarily on feature modeling and refinement for more accurate affinity construction. The feature refinement step is expected to capture the implicit structure information [39] encoded in learnable parameters.…”
Section: Introduction (mentioning)
confidence: 99%
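To make the quoted pipeline concrete, the sketch below shows the generic affinity-then-match structure common to the DGM methods discussed: node affinities computed from (assumed) CNN keypoint descriptors, followed by a combinatorial assignment step. It illustrates the problem setup rather than any cited method; with the blackbox scheme sketched earlier, the assignment step itself could sit inside the training loop.

```python
import torch
from scipy.optimize import linear_sum_assignment

def match_keypoints(feats_a, feats_b):
    # Node affinity: cosine similarity between learned descriptors
    # (feats_* are assumed to be per-keypoint CNN features).
    a = torch.nn.functional.normalize(feats_a, dim=1)
    b = torch.nn.functional.normalize(feats_b, dim=1)
    affinity = a @ b.t()                                   # (n_a, n_b)
    # Discrete matching: maximize total affinity via the Hungarian
    # algorithm, a linear-assignment relaxation of the full QAP.
    rows, cols = linear_sum_assignment(-affinity.detach().cpu().numpy())
    return affinity, list(zip(rows.tolist(), cols.tolist()))

affinity, matches = match_keypoints(torch.randn(10, 256), torch.randn(10, 256))
```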