2020
DOI: 10.48550/arxiv.2006.08852
Preprint

Counterexample-Guided Learning of Monotonic Neural Networks

Abstract: The widespread adoption of deep learning is often attributed to its automatic feature construction with minimal inductive bias. However, in many real-world tasks, the learned function is intended to satisfy domain-specific constraints. We focus on monotonicity constraints, which are common and require that the function's output increases with increasing values of specific input features. We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time. Additionally, …
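The constraint itself is simple to state operationally: raising a constrained feature must never lower the prediction. The sketch below illustrates this with a naive random-perturbation search for counterexamples; it is not the paper's verification procedure (the abstract does not detail it), and the callable `f`, the feature index, and the sampling scheme are illustrative assumptions.

```python
import numpy as np

def find_monotonicity_counterexample(f, x, feature, n_trials=1000, max_delta=1.0, seed=0):
    """Randomly search for a violation of 'f is non-decreasing in x[feature]'.

    f       : callable mapping a 1-D feature vector to a scalar prediction
    x       : base input, a 1-D numpy array
    feature : index of the feature that is supposed to be monotone
    Returns (x, x_up) with x_up[feature] > x[feature] but f(x_up) < f(x),
    or None if no violation was found within n_trials perturbations.
    """
    rng = np.random.default_rng(seed)
    y = f(x)
    for _ in range(n_trials):
        x_up = x.copy()
        x_up[feature] += rng.uniform(0.0, max_delta)  # raise only the constrained feature
        if f(x_up) < y:                               # prediction dropped: counterexample
            return x, x_up
    return None
```

A violation found by such a search is the kind of counterexample that a counterexample-guided loop would feed back into training.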

Cited by 3 publications (2 citation statements)
References 28 publications

“…One approach is to specify a loss function penalising the network for outputs that violate the constraints, as done in Sill and Abu-Mostafa [1997]. Alternatively, counterexample-guided learning Sivaraman et al [2020] can be employed to ensure the trained network does not produce any of these outputs when given the corresponding input. Lastly, as Theorem B equates counterfactual ordering with monotonicity, a recent method uses "look-up tables" Gupta et al [2016] to enforce monotonicity, and has been implemented in TensorFlow Lattice.…”
Section: End For
mentioning
confidence: 99%
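The penalty-based option mentioned in the snippet above can be sketched roughly as follows. This is a finite-difference variant written for illustration, not the exact hint formulation of Sill and Abu-Mostafa [1997]; the `model`, the step size `eps`, and the choice of constrained dimensions are assumptions.

```python
import torch

def monotonicity_penalty(model, x, monotone_dims, eps=1e-2):
    """Hinge penalty on decreases of the output when a constrained feature is raised.

    model         : maps a (batch, features) tensor to a (batch,) or (batch, 1) output
    x             : batch of inputs, shape (batch, features)
    monotone_dims : indices of features the output should be non-decreasing in
    """
    base = model(x)
    penalty = x.new_zeros(())
    for d in monotone_dims:
        shifted = x.clone()
        shifted[:, d] = shifted[:, d] + eps
        diff = model(shifted) - base          # >= 0 everywhere if the constraint holds
        penalty = penalty + torch.relu(-diff).mean()
    return penalty
```

In use, such a term would typically be added to the task loss with a weighting coefficient, e.g. `loss = task_loss + lam * monotonicity_penalty(model, x, monotone_dims=[0, 3])`, where `lam` and the listed dimensions are placeholders.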
“…In this case it can still be shown that the corresponding PDE operator is strongly monotone and the Browder-Minty theorem can be applied. Let us also point out that there are ways to enforce monotonicity during training [13,25,29] via a sufficiently good derivative approximation (Sobolev training) [12].…”
mentioning
confidence: 99%
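One derivative-based way to "enforce monotonicity during training", as mentioned in this snippet, can be sketched as a penalty on negative partial derivatives computed with automatic differentiation. This is an illustrative assumption about how such a term could look, not the formulation of the cited works [13, 25, 29] or of Sobolev training [12]; the `model` and the constrained dimensions are placeholders.

```python
import torch

def derivative_sign_penalty(model, x, monotone_dims):
    """Penalise negative partial derivatives of the output w.r.t. constrained features.

    model         : maps a (batch, features) tensor to a (batch,) or (batch, 1) output
    x             : batch of inputs, shape (batch, features)
    monotone_dims : feature indices in which the output should be non-decreasing
    """
    x = x.clone().requires_grad_(True)
    y = model(x).sum()                                   # scalar so autograd.grad applies
    (grads,) = torch.autograd.grad(y, x, create_graph=True)
    # grads[:, d] is the batch of partial derivatives d(output)/d(x_d)
    penalty = x.new_zeros(())
    for d in monotone_dims:
        penalty = penalty + torch.relu(-grads[:, d]).mean()
    return penalty
```

Because the gradient graph is retained (`create_graph=True`), the penalty can be backpropagated alongside the main training loss.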