2004
DOI: 10.1109/tnn.2003.820839
Markovian Architectural Bias of Recurrent Neural Networks

Abstract: In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community…
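The NPM construction sketched in the abstract can be illustrated in a few lines: drive an untrained sigmoid RNN with small random weights over a symbol sequence, quantize the recurrent activations into clusters, and estimate next-symbol probabilities per cluster. The following is a minimal sketch under stated assumptions, not the paper's exact procedure; the function names (`rnn_states`, `kmeans`, `build_npm`), the choice of k-means as the quantizer, the Laplace smoothing, and the toy sequence are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_states(seq, n_hidden=8, n_symbols=2, weight_scale=0.1):
    """Drive an untrained sigmoid RNN with small random weights over a
    symbol sequence and collect the recurrent-layer activations."""
    W_in = rng.normal(0.0, weight_scale, (n_hidden, n_symbols))
    W_rec = rng.normal(0.0, weight_scale, (n_hidden, n_hidden))
    h = np.zeros(n_hidden)
    states = []
    for s in seq:
        x = np.eye(n_symbols)[s]                           # one-hot input
        h = 1.0 / (1.0 + np.exp(-(W_in @ x + W_rec @ h)))  # sigmoid update
        states.append(h.copy())
    return np.array(states)

def kmeans(points, k, iters=50):
    """Plain k-means, standing in for whichever vector quantizer is used."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def build_npm(seq, labels, k, n_symbols=2, alpha=1.0):
    """Turn each activation cluster into a predictive state by counting
    (with Laplace smoothing) which symbol follows while the network sits
    in that cluster, i.e. estimate P(next symbol | cluster)."""
    counts = np.full((k, n_symbols), alpha)
    for t in range(len(seq) - 1):
        counts[labels[t], seq[t + 1]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy usage on a binary sequence with short-memory structure.
seq = [0, 1] * 200 + [0, 0, 1] * 100
labels = kmeans(rnn_states(seq), k=4)
npm = build_npm(seq, labels, k=4)
print(npm)  # rows: clusters, columns: P(next symbol | cluster)
```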

Cited by 155 publications (75 citation statements)
References 39 publications
“…In the context of Markovian jumping RNNs, exponential stability has been studied in many papers; see e.g. [1,3,15,17,21,25,35]. In particular, asymptotic stability has been investigated in [25] for continuous-time RNNs with Markovian jumping parameters, and stability analysis and synchronization problems have been dealt with in [15] for a class of discrete-time Markovian jumping RNNs with mixed time-delays.…”
Section: Introduction
confidence: 99%
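For readers unfamiliar with the system class this statement refers to, the sketch below simulates a discrete-time RNN whose weight matrix switches between modes according to a Markov chain. The matrices and transition probabilities are arbitrary toy values of my choosing, and the cited stability analyses rest on Lyapunov-type arguments rather than simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two operating modes; the weight matrix in use jumps between them
# according to a Markov chain with transition matrix P.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
W = [0.3 * rng.standard_normal((4, 4)),   # recurrent weights in mode 0
     0.3 * rng.standard_normal((4, 4))]   # recurrent weights in mode 1

x = rng.standard_normal(4)
mode = 0
for t in range(200):
    x = np.tanh(W[mode] @ x)          # RNN state update in the current mode
    mode = rng.choice(2, p=P[mode])   # Markovian parameter jump
print(np.linalg.norm(x))  # a shrinking norm hints at, but does not prove, stability
```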
“…As pointed out in [10], the state space of RNNs initialized with small weights is organized in a Markovian way prior to any training. To assess what has actually been learnt during training, it is always necessary to compare the performance of trained RNNs with that of Markov models.…”
Section: Variable Length Markov Models
confidence: 99%
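A minimal version of the Markov baseline this statement calls for is a fixed-order (n-gram) predictor fit on the training sequence and scored by per-symbol negative log-likelihood on held-out data. The sketch below assumes Laplace smoothing and illustrative function names; a variable-length Markov model, as in the section this quote comes from, would refine it by extending contexts only where the data warrants.

```python
import math
from collections import defaultdict

def fit_markov(seq, order=2, n_symbols=2, alpha=1.0):
    """Fixed-order Markov model: Laplace-smoothed counts of the symbol
    following each length-`order` context."""
    counts = defaultdict(lambda: [alpha] * n_symbols)
    for t in range(order, len(seq)):
        counts[tuple(seq[t - order:t])][seq[t]] += 1
    return counts

def avg_nll(counts, seq, order=2):
    """Average negative log-likelihood per symbol -- the score on which a
    trained RNN and this baseline can be compared on the same test set."""
    total, n = 0.0, 0
    for t in range(order, len(seq)):
        c = counts[tuple(seq[t - order:t])]    # unseen contexts fall back
        total -= math.log(c[seq[t]] / sum(c))  # to the smoothed prior
        n += 1
    return total / n

train = [0, 1] * 200 + [0, 0, 1] * 100
test = [0, 1] * 50
model = fit_markov(train)
print(avg_nll(model, test))  # lower is better; compare against the RNN's NLL
```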
“…Several connectionist models directly exploiting the Markovian organization [10] of the RNN state space have been suggested. The activities of recurrent neurons in a recurrent neural network initialized with small weights are grouped in clusters [9].…”
Section: Models Using Architectural Bias Property
confidence: 99%