2020
DOI: 10.48550/arxiv.2009.05908
Preprint

Understanding Boolean Function Learnability on Deep Neural Networks

Abstract: Computational learning theory states that many classes of Boolean formulas are learnable in polynomial time. This paper addresses the understudied subject of how, in practice, such formulas can be learned by deep neural networks. Specifically, we analyse Boolean formulas associated with the decision version of combinatorial optimisation problems, model sampling benchmarks, and random 3-CNFs with varying degrees of constrainedness. Our extensive experiments indicate that: (i) regardless of the combinatorial opt…
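The abstract describes learning tasks built from random 3-CNFs whose constrainedness is controlled by the clause-to-variable ratio. As a rough sketch of how such a labelled dataset could be constructed (function names and parameters are illustrative, not taken from the paper):

```python
import itertools
import random

def random_3cnf(n_vars, n_clauses, seed=0):
    """Sample a random 3-CNF: each clause picks 3 distinct variables,
    each negated with probability 1/2. Literal v>0 means x_v, v<0 means not x_v."""
    rng = random.Random(seed)
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def evaluate(cnf, assignment):
    """Check whether a 0/1 assignment satisfies the CNF
    (assignment is 0-indexed: variable v maps to assignment[v-1])."""
    return all(any((assignment[abs(l) - 1] == 1) == (l > 0) for l in clause)
               for clause in cnf)

def labelled_dataset(n_vars, ratio, seed=0):
    """Enumerate all 2^n assignments and label each by satisfaction.
    `ratio` = clauses/variables controls constrainedness; ~4.26 is the
    well-known sat/unsat phase-transition point for random 3-SAT."""
    cnf = random_3cnf(n_vars, round(ratio * n_vars), seed)
    xs = list(itertools.product((0, 1), repeat=n_vars))
    ys = [int(evaluate(cnf, x)) for x in xs]
    return xs, ys

xs, ys = labelled_dataset(n_vars=10, ratio=4.26)
print(len(xs), sum(ys))  # 1024 assignments; positives shrink as ratio grows
```

A dataset like this (assignments as inputs, satisfaction as the binary label) is the kind of supervised target a deep network would then be trained on; the paper's actual generation procedure may differ.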

Cited by 2 publications (4 citation statements)
References 31 publications
“…Experiments of Table 4 show how all the considered out-of-the-box LENs generalize better than decision trees on complex Boolean functions (e.g. CUB) as expected [Tavares et al., 2020], and they usually outperform BRL as well. LENs based on ReLU networks are definitely the ones with better classification accuracy, confirming the intuitions reported in Section 5.3. µ nets, however, are quite close in terms of classification accuracy.…”
Section: Results
confidence: 57%
“…• It reports experimental results using three out-of-the-box preset LENs showing how they may generalize better in terms of model accuracy than established white-box models such as decision trees on complex Boolean tasks (in line with Tavares' work [Tavares et al., 2020]).…”
Section: Introduction
confidence: 80%
“…classification accuracy, explanation accuracy, and fidelity) can be explained observing how entropy-based networks are far less constrained than ψ networks, both in the architecture (our approach does not apply weight pruning) and in the loss function (our approach applies a regularization on the distributions α_i and not on all weight matrices). Likewise, the main reason why the proposed approach provides a higher classification accuracy with respect to BRL and decision trees may lie in the smoothness of the decision functions of neural networks, which tend to generalize better than rule-based methods, as already observed by Tavares et al. (Tavares et al. 2020). We report in the Appendix a few examples of logic explanations extracted by each method, as well as in Fig. 4.…”
Section: Results
confidence: 53%