2012
DOI: 10.1007/978-3-642-31424-7_10

Learning Boolean Functions Incrementally

Abstract: Classical learning algorithms for Boolean functions assume that unknown targets are Boolean functions over a fixed set of variables. This assumption precludes scenarios where indefinitely many variables are needed, and it induces unnecessary queries when many variables are redundant. Based on a classical learning algorithm for Boolean functions, we develop two learning algorithms that infer Boolean functions over enlarging sets of ordered variables. We evaluate their performance in the learning-based loop invariant generation framework.
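The query model behind this setting can be pictured with a small sketch. The following is a minimal illustration, assuming a membership/equivalence Teacher interface, zero-padding of unused variables, and a naive grow-by-one strategy; none of these are the paper's actual algorithms. The learner hypothesizes over a prefix x_1..x_k of the ordered variables and enlarges the prefix only when an equivalence counterexample forces it to.

```python
# Minimal, illustrative sketch of query-based learning over an enlarging,
# ordered variable set. The Teacher interface, zero-padding convention, and
# one-variable-at-a-time growth are assumptions made here for illustration;
# they are not the algorithms from the paper.
from itertools import product

class Teacher:
    """Answers membership and equivalence queries for a hidden target."""
    def __init__(self, target, n_vars):
        self.target, self.n = target, n_vars

    def member(self, x):
        return self.target(x)            # membership query: label one point

    def equivalent(self, hyp, k):        # equivalence query over x_1..x_k
        for x in product((0, 1), repeat=self.n):
            if hyp(x[:k]) != self.target(x):
                return x                 # counterexample
        return None                      # hypothesis is exactly the target

def learn_incrementally(teacher):
    k = 0                                # number of ordered variables in use
    while True:
        # Hypothesis: a truth table over x_1..x_k, filled by membership
        # queries; unused trailing variables are padded with 0.
        table = {a: teacher.member(a + (0,) * (teacher.n - k))
                 for a in product((0, 1), repeat=k)}
        hyp = table.__getitem__
        cex = teacher.equivalent(hyp, k)
        if cex is None:
            return k, table
        k += 1                           # counterexample: enlarge the prefix

# The target depends only on x1 and x3 out of five ordered variables,
# so the learner stops at k = 3 instead of querying all of {0,1}^5.
teacher = Teacher(lambda x: x[0] ^ x[2], n_vars=5)
k, table = learn_incrementally(teacher)
print("variables used:", k)              # prints: variables used: 3
```

The growing prefix reflects the abstract's second motivation: a target over few relevant variables is learned without issuing queries over the full variable space.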

Cited by 12 publications (8 citation statements)
References 22 publications
“…Since a property often depends on a subset of context variables, it suffices to find contextual assumptions over such variables. In [14], an algorithm inferring Boolean functions over relevant variables is proposed. A similar learning algorithm for BDDs may further improve the performance of symbolic assume-guarantee reasoning.…”
Section: Results (mentioning confidence: 99%)
“…A more powerful well-foundedness checker should make the framework even more effective. An incremental learning algorithm for Boolean functions [9] should improve the efficiency of our technique as well.…”
Section: Discussion (mentioning confidence: 99%)
“…Recently, learning has gained renewed interest in the context of program verification, particularly for synthesizing loop invariants [15, 24, 25, 34, 35, 49–52]. However, Garg et al. [25] argue that merely learning from positive and negative examples for synthesizing invariants is inherently non-robust and introduce ICE-learning, which extends the classical learning setting with implications.…”
Section: ICE Learning Model (mentioning confidence: 99%)
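A minimal sketch of the consistency condition that ICE-learning adds, assuming integer program states and a hypothetical ice_consistent helper; the names and toy data are illustrative, not from [25]:

```python
# Hedged sketch of the ICE consistency condition: names and the toy data
# below are illustrative, not from Garg et al. A candidate invariant must
# hold on every positive example, fail on every negative example, and
# respect each implication pair (p, q): if it holds at p, it must hold at q.
def ice_consistent(candidate, positives, negatives, implications):
    if not all(candidate(p) for p in positives):
        return False
    if any(candidate(n) for n in negatives):
        return False
    # An implication (p, q) models one loop step from state p to state q.
    return all(not candidate(p) or candidate(q) for p, q in implications)

# Toy instance: program states are integers, candidate invariant "x >= 0".
inv = lambda x: x >= 0
print(ice_consistent(inv,
                     positives=[0, 1],         # reachable states
                     negatives=[-5],           # bad states
                     implications=[(3, 4)]))   # loop step 3 -> 4; prints True
```

The implication pairs are what make the setting robust: a hypothesis that labels p positive but q negative cannot be an inductive invariant, even though neither state is itself a positive or negative example.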