2021
DOI: 10.48550/arxiv.2102.01633
Preprint

Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages

Abstract: We analyze the computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function within the Chomsky hierarchy. This model restricted to integer weights coincides with binary-state NNs with the Heaviside activation function, which are equivalent to finite automata (Chomsky level 3) recognizing regular languages (REG), while rational weights make this model Turing-complete even for three analog-state units (Chomsky level 0). For the intermediate model αANN of a bin…
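As a rough illustration of the model the abstract describes (a sketch under my own assumptions, not code from the paper): the saturated-linear activation is the identity on [0, 1] and clipped outside it, and the network evolves by a discrete-time update y' = σ(Wy + b). With integer weights the reachable states stay binary, mimicking a Heaviside/binary-state network; rational weights permit genuinely analog states in (0, 1).

```python
import numpy as np

def saturated_linear(x):
    """Saturated-linear activation: identity on [0, 1], clipped outside."""
    return np.clip(x, 0.0, 1.0)

def step(state, W, b):
    """One discrete-time recurrent update: y' = sigma(W y + b)."""
    return saturated_linear(W @ state + b)

# Integer weights: every update maps binary states to binary states,
# so the dynamics match a binary-state (Heaviside) network.
W = np.array([[0, 1], [1, 0]])
b = np.array([0, 0])
y = np.array([1.0, 0.0])
for _ in range(3):
    y = step(y, W, b)
    print(y)  # alternates between [0, 1] and [1, 0]
```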

Cited by 1 publication (1 citation statement)
References 23 publications
“…It follows that the DCFL′ language L# is DCFL′-simple under the truth-table reduction by Mealy machines. Since this reduction can be implemented by 1ANNs, we achieve the desired stronger separation DCFL′ ⊆ (2ANN \ 1ANN) in the analog neuron hierarchy [10]. This result constitutes a non-trivial application of the proposed concept of DCFL′-simple problem.…”
Section: Introduction
confidence: 83%
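To make the quoted reduction concrete: a Mealy machine is a finite-state transducer that emits exactly one output symbol per input symbol, so it can translate membership queries about one language into queries about another on the fly. The sketch below is purely illustrative; the states, alphabet, and transition table are invented here and are not taken from either paper.

```python
class MealyMachine:
    """Finite-state transducer: each transition consumes one input symbol
    and emits one output symbol (delta: (state, symbol) -> (state, output))."""

    def __init__(self, start, delta):
        self.start = start
        self.delta = delta

    def transduce(self, word):
        state, out = self.start, []
        for sym in word:
            state, o = self.delta[(state, sym)]
            out.append(o)
        return "".join(out)

# Hypothetical example over {a, b}: mark each alternation with '#',
# loosely in the spirit of producing a '#'-delimited target language.
delta = {
    ("q0", "a"): ("qa", "a"),
    ("q0", "b"): ("qb", "b"),
    ("qa", "a"): ("qa", "a"),
    ("qa", "b"): ("qb", "#"),
    ("qb", "b"): ("qb", "b"),
    ("qb", "a"): ("qa", "#"),
}
m = MealyMachine("q0", delta)
print(m.transduce("aabba"))  # -> "aa#b#"
```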