Interactive reservoir computing for chunking information streams
2018 | DOI: 10.1371/journal.pcbi.1006400

Abstract: Chunking is the process by which frequently repeated segments of temporal inputs are concatenated into single units that are easy to process. Such a process is fundamental to time-series analysis in biological and artificial information processing systems. The brain efficiently acquires chunks from various information streams in an unsupervised manner; however, the underlying mechanisms of this process remain elusive. A widely-adopted statistical method for chunking consists of predicting frequently repeated c…

Cited by 13 publications (22 citation statements). References 50 publications.
“…We previously used paired reservoir computing for chunking, where two recurrent networks supervise each other to mimic the partner's responses to a common temporal input [55]. Although that model also learns self-consistency between input and output data, its performance was severely limited because the model required exactly as many output neurons as chunks.…”
Section: Discussion
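The mutual-supervision scheme described in this statement can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact model: the network sizes, input stream, and plain LMS readout rule are all placeholders, and pure mutual imitation admits a trivial solution (both outputs collapsing toward a constant) that the original work avoids with additional mechanisms.

```python
import numpy as np

# Sketch of paired reservoir computing: two random recurrent networks
# receive the same input stream, and each linear readout is trained
# online to mimic the *other* network's readout output.
# All hyperparameters below are illustrative assumptions.

rng = np.random.default_rng(0)

N = 100      # neurons per reservoir
T = 500      # time steps
lr = 0.001   # readout learning rate (small, for stable LMS)

# Common temporal input: a short repeating "chunk".
u = np.tile([1.0, 1.0, -1.0, -1.0, 0.0], T // 5)

def make_reservoir(n, spectral_radius=0.9):
    """Random recurrent weight matrix rescaled to a given spectral radius."""
    W = rng.normal(0.0, 1.0, (n, n))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.normal(0.0, 1.0, n)
    return W, w_in

W1, win1 = make_reservoir(N)
W2, win2 = make_reservoir(N)
w_out1 = rng.normal(0.0, 0.1, N)   # readouts start out different
w_out2 = rng.normal(0.0, 0.1, N)

x1 = np.zeros(N)
x2 = np.zeros(N)
outs = np.zeros((T, 2))

for t in range(T):
    # tanh reservoir updates driven by the shared input
    x1 = np.tanh(W1 @ x1 + win1 * u[t])
    x2 = np.tanh(W2 @ x2 + win2 * u[t])
    z1, z2 = w_out1 @ x1, w_out2 @ x2
    # Mutual supervision: each readout chases the partner's output (LMS rule)
    w_out1 += lr * (z2 - z1) * x1
    w_out2 += lr * (z1 - z2) * x2
    outs[t] = z1, z2

# The two outputs should disagree less at the end than at the start.
early = np.mean((outs[:50, 0] - outs[:50, 1]) ** 2)
late = np.mean((outs[-50:, 0] - outs[-50:, 1]) ** 2)
```

With a small learning rate this is a stable LMS iteration on the output difference, so the two readouts converge toward agreement; in the full model, that consensus signal is what lets chunk boundaries in the shared input be identified.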
“…This model was later extended with an RL agent that controls the gating mechanism with a learned policy [15]. Segmenting the information stream into interpretable units has also been attempted with reservoir computing [1], where this mechanism was shown to be sufficient to identify event boundaries.…”
Section: Related Work
“…In addition to setting up regular feedback connections as defined in RC, the feedback-connection mechanism can be hijacked to build teacher Nodes or reward Nodes, helping to build architectures based on online learning such as the model from [2] (see sec:asabuki), or to implement reward-modulated online learning rules like the 3-factor Hebbian learning rule proposed by [11] (ongoing work). These reward or teaching Nodes can provide a connected Node with target values for training at runtime, even if those target values are not available before runtime, e.g.…”
Section: Feedback Loops
“…(see [23] for a recent review): they generally include modified learning methods and architectures and some of them allow for the composition of several reservoirs (decoupled-ESNs [31], tree ESNs [7], deep reservoirs [8], hierarchical-task reservoirs [18], and more exotic architectures like Reservoir-of-Reservoirs (RoR) [4] or self-supervised pairs of reservoirs [2]).…”
Section: Introduction