2015
DOI: 10.1016/j.neunet.2014.12.003
Self-organizing maps based on limit cycle attractors

Cited by 15 publications (18 citation statements). References 38 publications.
“…This claim appears to be consistent with at least some neurophysiological findings [40]. Of course, there are other types of attractor states in recurrent neural networks besides fixed activity patterns, such as periodically recurring activity patterns (limit cycles) [41], and they could also be potential computational correlates. In particular, it has been proposed that instantaneous activity states of a network are inadequate for realizing conscious experience, and that qualia are correlated instead with a system's activity space trajectory [35].…”
Section: Past Suggestions for Computational Correlates of Consciousness
Citation type: supporting (confidence: 88%)
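
The contrast drawn in this excerpt, between fixed-point attractors and periodically recurring activity patterns (limit cycles), can be made concrete with two toy dynamical systems. The sketch below is purely illustrative and is not taken from the cited works; the Hopf-style oscillator and all parameters are assumptions chosen for clarity.

import numpy as np

def fixed_point_step(x):
    # A contracting linear map: every trajectory converges to the fixed point at 0.
    return 0.5 * x

def limit_cycle_step(z, dt=0.1):
    # Euler step of a Hopf-style oscillator (assumed for illustration): trajectories
    # are attracted to a roughly circular orbit of radius ~1 (a limit cycle),
    # rather than to a single resting state.
    r2 = z[0] ** 2 + z[1] ** 2
    dx = (1.0 - r2) * z[0] - z[1]
    dy = (1.0 - r2) * z[1] + z[0]
    return z + dt * np.array([dx, dy])

x = np.array([2.0, -1.0])
z = np.array([2.0, -1.0])
for _ in range(500):
    x = fixed_point_step(x)
    z = limit_cycle_step(z)

print(np.round(x, 3))                 # ~[0, 0]: activity settles on a fixed point
print(round(float(np.hypot(*z)), 3))  # ~1.0: activity keeps circling a radius-1 orbit

In the fixed-point case the "attractor state" is a single activity pattern; in the limit-cycle case it is a repeating trajectory through activity space, which is the distinction the excerpt points to.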
“…Notice that since each activity pattern a(t) depends on the last activity pattern a(t − 1), the activity pattern of a map changes with time and forms a dynamical system. Specifically, we have found that limit cycles are a prominent class of attractors that are learned via self-organization [8,9]. In order to generate steady joint command output, the oscillatory activity in the joint position map needs to be "smoothed out".…”
Section: Neural Architecture
Citation type: mentioning (confidence: 99%)
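
The excerpt above describes two ingredients: map activity a(t) that depends on the previous activity a(t-1), forming a dynamical system whose learned attractors are often limit cycles, and a smoothing stage that turns oscillatory activity into a steady joint command. The following sketch illustrates both ideas with a toy recurrent map; it is not the authors' architecture, and every name, size, and parameter is an assumption.

import numpy as np

rng = np.random.default_rng(0)
N = 20                                        # number of map units (assumed)
W_in = rng.normal(size=N)                     # input weights (assumed, untrained)
W_rec = rng.normal(size=(N, N)) / np.sqrt(N)  # recurrent weights (assumed, untrained)
readout = rng.normal(size=N) / N              # linear readout to a scalar "joint command" (assumed)

def step(a_prev, x):
    # One update: a(t) is a function of the external input x and of a(t-1),
    # so the map's activity forms a discrete-time dynamical system.
    return np.tanh(W_in * x + W_rec @ a_prev)

a = np.zeros(N)
x = 0.5           # constant external input (assumed)
alpha = 0.1       # smoothing factor (assumed)
smoothed = 0.0

for t in range(300):
    a = step(a, x)                                   # recurrent dynamics; with trained weights
                                                     # this can settle into a limit cycle
    raw = readout @ a                                # instantaneous, possibly oscillatory output
    smoothed = (1 - alpha) * smoothed + alpha * raw  # low-pass filter yields a steady command

print(round(float(smoothed), 3))

With random untrained weights this toy map may relax to a fixed point rather than a limit cycle; the point of the sketch is only the recurrent dependence a(t) = f(x, a(t-1)) and the exponential-moving-average smoothing of the readout described in the excerpt.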
“…Although our previous work has provided some positive preliminary results, it is limited in that the architecture has not been used to generate outputs and the data size is limited (50 pairs of phoneme sequences and images) [9]. More importantly, the ability to generalize to new and unseen data, a critical indicator of a successful neurocognitive architecture, is not clear.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)