2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob)
DOI: 10.1109/biorob.2018.8487959

Learning to Reproduce Visually Similar Movements by Minimizing Event-Based Prediction Error

Cited by 15 publications (6 citation statements). References 24 publications.
“…With the recent advances in backpropagation-like learning rules for SNNs, as in Kaiser et al (2019), we can learn different motion types for different tasks in the same network and start them with different go-cues. We also want to integrate event-based vision into this system to acquire the target and drive the adaptation, as in Kaiser et al (2016), and to explore learning by demonstration, as in Kaiser et al (2018). We are working on extending this work from pointing at a given target to performing a grasping or tool manipulation task there.…”
Section: Discussion
confidence: 99%
“…The system is capable of learning the temporal structure of space-time events, adapts to multiple scales, and is able to predict future events in a video sequence. Using a DVS camera, Kaiser et al (2018) present a method to learn movements from visual predictions. The proposed method consists of two phases.…”
Section: Prediction
confidence: 99%
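To make the two-phase idea in the statement above concrete, here is a minimal sketch, not the authors' implementation: phase one assumes a learned forward model that predicts the event image a motor command should produce, and phase two selects the command whose observed events minimize the prediction error. All names (`predict_events`, `execute`, the error metric) are hypothetical illustrations.

```python
import numpy as np

# Phase 1 (assumed): a learned forward model mapping a motor command to an
# expected event image (per-pixel event counts over a short time window).
def predict_events(model, command):
    """Predicted event image for a motor command, via a learned model."""
    return model(command)

def prediction_error(observed, predicted):
    # Per-pixel squared error between event images; the paper's actual
    # event-based error metric may differ.
    return float(np.sum((observed - predicted) ** 2))

# Phase 2 (assumed): greedy search over candidate commands, standing in for
# whatever optimization the paper uses. `execute` runs a command on the
# robot and returns the observed event image.
def select_command(model, execute, candidates):
    best, best_err = None, np.inf
    for command in candidates:
        observed = execute(command)
        err = prediction_error(observed, predict_events(model, command))
        if err < best_err:
            best, best_err = command, err
    return best, best_err
```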
“…Vision (Klein et al., 2015), predator robot (Moeys et al., 2016a), robot goalies (Becanovic et al., 2002; Delbruck and Lichtsteiner, 2007; Delbruck and Lang, 2013), humanoid robot (Rea et al., 2013). Algorithms: mapping (Pérez-Carrasco et al., 2013), filtering (Ieng et al., 2014; Bidegaray-Fesquet, 2015), lifetime estimation (Mueggler et al., 2015b), classification (Li et al., 2018), compression (Brandli et al., 2014; Doutsi et al., 2015; Bi et al., 2018), prediction (Gibson et al., 2014b; Kaiser et al., 2018), high-speed frame capturing (Liu et al., 2017b; Pan et al., 2018), spiking neural networks (Dhoble et al., 2012; Stromatias et al., 2017), data transmission (Corradi and Indiveri, 2015), matching (Moser, 2015), hybrid methods (Indiveri, 2011a,b, 2012; Weikersdorfer et al., 2014; Leow and Nikolic, 2015), fusion (Akolkar et al., 2015b; Rios-Navarro et al., 2015; Neil and Liu, 2016). Feature extraction: vehicle detection (Bichler et al., 2011, 2012), gesture recognition (Ahn, 2012), robot vision (Lagorce et al., 2013), hardware implementation (del Campo et al., 2013; Yousefzadeh et al., 2015; Hoseini and Linares-Barranco, 2018), optical flow (Koeth et al., 2013; Clady et al., 2017; Zhu et al., 2017a), feature extraction algorithms (Lagorce et al., 2015a; …”
Section: Vision and Attention
confidence: 99%
“…We note that a similar mechanism could be integrated in a robotic head, such as the one used in this paper, to perform actual eye movements (see Figure 2). However, an additional mechanism to discard events resulting from the ego-motion would be required, which could be based on visual prediction [11], [13], [14].…”
Section: Covert Attention Window
confidence: 99%
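As a crude illustration of the ego-motion filtering suggested in that statement, and an assumption rather than the mechanism of [11], [13], [14]: events that a visual-prediction model attributes to the robot's own motion are suppressed, leaving the residual events likely caused by external motion. The function name and the simple thresholding are hypothetical.

```python
import numpy as np

def filter_ego_motion(observed, predicted_ego, threshold=1):
    """Suppress events explained by predicted ego-motion.

    observed, predicted_ego: 2D arrays of per-pixel event counts over a
    short window. Pixels where the ego-motion prediction accounts for the
    observed activity (within `threshold`) are zeroed, keeping only
    residual events likely caused by external motion.
    """
    residual = observed - predicted_ego
    residual[np.abs(residual) <= threshold] = 0
    return residual
```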