Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan)
DOI: 10.1109/ijcnn.1993.714247

Generalization of the maximum capacity of recurrent neural networks

Abstract: In our previous work, we proposed a novel model that attains the maximum capacity of one-layer recurrent neural networks by using an initiator, A, to construct the weight matrix and threshold and to define an equation that produces all memorized vectors. In this paper, we generalize that model by lifting the restriction on A and present the new version of the model. In addition to explaining the new version, we provide further discussion of it and compare our model with the SOR method.
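The abstract does not spell out how the weight matrix and threshold are built from the initiator A, so the following is only a minimal point-of-reference sketch of the setting being generalized: a one-layer recurrent network whose memorized bipolar vectors are fixed points of its threshold update rule. The Hebbian outer-product rule used here is a standard stand-in, not the paper's initiator-based construction.

```python
import numpy as np

def build_weights(patterns):
    """Outer-product (Hebbian) weight matrix with zero diagonal.

    A standard stand-in; the paper's initiator-based construction
    is not given in the abstract."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=50):
    """Iterate the synchronous sign-threshold update until a fixed point."""
    for _ in range(steps):
        x_next = np.sign(W @ x)
        x_next[x_next == 0] = 1.0   # break ties toward +1
        if np.array_equal(x_next, x):
            break
        x = x_next
    return x

# Two orthogonal 8-dimensional bipolar vectors to memorize.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]], dtype=float)
W = build_weights(patterns)
for p in patterns:
    # Each stored vector should be reproduced unchanged, i.e. memorized.
    assert np.array_equal(recall(W, p.copy()), p)
```

Capacity, in this setting, is the number of vectors such a network can store as stable fixed points; the paper's claim is that its initiator-based construction attains the maximum for one-layer recurrent networks.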

Cited by 1 publication (3 citation statements). References 6 publications.
“…Theorem 4.1: Let M_π denote the set of all absolutely continuous densities w.r.t. π. Then for any density π on the hypothesis class F, any δ ∈ (0, 1], and (16), the following inequality holds with probability at least 1 − 2δ…”
Section: Remark 4.1 (Interpretation of Constants)
Confidence: 99%
“…E[(L(f) − V_N(f))^r]. Then, using Lemma A.15, we get E[e^{λ(L(f)−V_N(f))}] ≤ 4(r − 1) G_e(f)^{2r} (A.153). Now, using Lemma A.16, we obtain E[e^{λ(L(f)−V_N(f))}] ≤ 1 + (1/N) Σ_{r=2}^∞ (m + r − 1)!…”
Confidence: 92%