2017
DOI: 10.1109/tnnls.2015.2492140

Learning to Perceive the World as Probabilistic or Deterministic via Interaction With Others: A Neuro-Robotics Experiment

Abstract: We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotics learning experi…
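The abstract describes a network that outputs, at each time step, both a mean and a variance (inverse precision) for the next sensory input. A minimal sketch of that prediction head is shown below, using a plain numpy recurrent step and a Gaussian negative log-likelihood loss; all names and sizes are illustrative, not the authors' S-MTRNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(h, x, W_h, W_x):
    """One vanilla tanh recurrent step; stands in for the MTRNN dynamics."""
    return np.tanh(W_h @ h + W_x @ x)

def predict_mean_var(h, W_mu, W_v):
    """Map the hidden state to a predicted mean and a strictly positive variance."""
    mu = W_mu @ h
    var = np.exp(W_v @ h)  # exponential keeps the variance positive
    return mu, var

def gaussian_nll(x, mu, var):
    """Gaussian negative log-likelihood of x under the predicted distribution."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Illustrative dimensions: 8 hidden units, 2-D sensory input.
hid, obs = 8, 2
W_h = rng.normal(scale=0.1, size=(hid, hid))
W_x = rng.normal(scale=0.1, size=(hid, obs))
W_mu = rng.normal(scale=0.1, size=(obs, hid))
W_v = rng.normal(scale=0.1, size=(obs, hid))

h = rnn_step(np.zeros(hid), np.zeros(obs), W_h, W_x)
mu, var = predict_mean_var(h, W_mu, W_v)
loss = gaussian_nll(rng.normal(size=obs), mu, var)
```

Training would minimize this loss over whole sequences, so the network can lower its cost either by predicting accurately (low variance, deterministic regime) or by widening the predicted variance where transitions are unpredictable (probabilistic regime).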

Cited by 47 publications (43 citation statements)
References 58 publications
“…To this end, we conducted a simulation experiment in which the VBP-RNN learned to predict/generate fluctuated temporal patterns containing probabilistic transitions between prototypical patterns. Consistent with Murata et al [13], results showed that the different weighting arbitrates between two extremes at which the model develops either a deterministic dynamic structure or a probabilistic one. Analysis on simulation results clarifies how the degree of generalization in learning as well as the strength of the top-down intentionality in generating patterns changes from one extreme to another.…”
supporting
confidence: 85%
“…In this direction, Murata et al [13] developed a predictive coding-type stochastic RNN model inspired by the free energy minimization principle [7]. This model learns to predict the mean and variance of sensory input for each next time step of multiple perceptual sequences, mapping from its current latent state.…”
mentioning
confidence: 99%
“…The error regression process was implemented in deterministic RNNs, and it was shown how it could help the generalization capability of those models (Tani & Ito, 2003; Murata et al., 2017; Ahmadi & Tani, 2017b). This testing process through error regression bears similarities to, and is inspired by, predictive coding.…”
Section: Error Regression
mentioning
confidence: 99%
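The error-regression process quoted above adapts an internal state at test time, with weights frozen, so that predictions over a recent window match the observed sequence. The sketch below illustrates that idea in the simplest possible setting: a fixed linear map stands in for the trained network, and only the latent state is updated by gradient descent on the squared prediction error. All names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed, "trained" mapping from latent state to observations.
# QR gives orthonormal columns, so plain gradient descent converges cleanly.
Q = np.linalg.qr(rng.normal(size=(4, 3)))[0]

z_true = rng.normal(size=3)   # latent cause of the observed window
x_obs = Q @ z_true            # observed sensory window

z = np.zeros(3)               # latent state to be regressed; weights stay fixed
lr = 0.5
for _ in range(200):
    err = Q @ z - x_obs       # prediction error over the window
    z -= lr * (Q.T @ err)     # gradient step on the squared error w.r.t. z only
```

In the papers cited above the frozen model is a trained RNN and the regressed quantity is its internal (context) state, but the principle is the same: inference by minimizing prediction error, as in predictive coding.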
“…Related work with RNNs has implemented hierarchical RNN architectures that develop symbol-like encodings in the deeper RNN layer via gradient descent (Tani, 2003). Later, the process was termed an error regression scheme and was closely related to the free energy principle (Murata et al., 2017). Related work has also been put forward when setting internal hidden states - often referred to as parametric bias neurons - to induce particular behavioral primitives and sequences thereof by suitably trained hierarchical RNN architectures (Arie et al., 2009; Tani, 1996).…”
Section: Introduction
mentioning
confidence: 99%