2008
DOI: 10.1016/j.eswa.2007.04.010
Applicability of feed-forward and recurrent neural networks to Boolean function complexity modeling

Cited by 5 publications (3 citation statements) | References 21 publications
“…The width of the neural network can be traded off against depth to alleviate the worst-case requirement for the number of neurons (Anthony 2010). Other neural networks have also been proven to universally implement Boolean formulas, such as the binary pi-sigma network (Shin and Ghosh 1991) and the binary product-unit network (Zhang, Yang, and Wu 2011). A body of empirical work followed the positive theoretical results on the learnability of Boolean functions: (Miller 1999) shows that parity and multiplier functions are efficiently learnable; (Franco and Anthony 2004; Franco 2006; Franco and Anthony 2006) study complexity metrics related to the generalisation abilities of Boolean functions implemented via neural networks; (Subirats et al 2006; Subirats, Jerez, and Franco 2008) and (Zhang, Ma, and Yang 2003) give algorithms for learning Boolean circuits with thresholding neural networks; (Prasad and Beg 2009) studies pre-processing techniques for using ANNs to learn Boolean circuits; and (Beg, Prasad, and Beg 2008) studies approximating a Boolean function's complexity using an ANN. (Pan and Srikumar 2016) shows how neural networks with ReLU activations implement Boolean functions much more compactly than networks with linear threshold units.…”
Section: On Deep Boolean Function Learnability (mentioning; confidence: 99%)
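As a minimal illustration of the last point in the statement above, the sketch below builds a two-unit ReLU network that computes XOR exactly. The specific weights and the helper name xor_relu are our own illustrative choices, not the construction from Pan and Srikumar (2016).

```python
# Hedged sketch: XOR with two ReLU units, showing that a small ReLU
# network can implement a Boolean function exactly.
def relu(z):
    return max(0.0, z)

def xor_relu(x1, x2):
    """XOR(x1, x2) = relu(x1 + x2) - 2 * relu(x1 + x2 - 1)."""
    h1 = relu(x1 + x2)        # fires when at least one input is 1
    h2 = relu(x1 + x2 - 1.0)  # fires only when both inputs are 1
    return h1 - 2.0 * h2

# Verify against the Boolean truth table.
for a in (0, 1):
    for b in (0, 1):
        assert xor_relu(a, b) == (a ^ b)
print("two-ReLU XOR matches the Boolean truth table")
```

The same function needs more units in a single layer of linear threshold gates, which is the compactness gap the quoted passage refers to.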
“…In the Boolean network model, gene expression is quantized to only two levels: "T" (True) and "F" (False), or "1" and "0", respectively. We refer to [7] for the logical notations, concepts, and operators used in this paper, and to [2] for related work in neural networks.…”
Section: Input-State Approach to Boolean Networks, I. Introduction (mentioning; confidence: 99%)
“…Its dynamic model is assumed to be (2). If the number of nodes in a Boolean network is n, then it is obvious that the state space consists of 2^n states and is a finite set. So, as a dynamic process on a finite set, there must be at least one fixed point or cycle, and a trajectory starting from any initial state must eventually enter a cycle (a fixed point can be considered a cycle of length one).…”
Section: Input-State Approach to Boolean Networks, I. Introduction (mentioning; confidence: 99%)
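The finiteness argument in this statement is easy to check in code. The sketch below (reusing the hypothetical three-gene rule from the earlier sketch; trajectory_cycle is an illustrative helper, not notation from the quoted paper) iterates the update map from every one of the 2^3 initial states until a state repeats, which must happen because the state space is finite.

```python
# Hedged sketch: every trajectory of a finite-state update map enters a
# cycle; a fixed point is a cycle of length one.
from itertools import product

def step(state):
    # Same hypothetical three-gene rule as in the sketch above.
    x1, x2, x3 = state
    return (x2 and x3, not x1, x1 or x2)

def trajectory_cycle(step, start):
    """Iterate until a state repeats; return (transient length, cycle length)."""
    seen = {}
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

# Enumerate all 2**3 = 8 states of the toy network.
for start in product((False, True), repeat=3):
    print(start, "->", trajectory_cycle(step, start))
```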