2015 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2015.7169173

Scene stitching with event-driven sensors on a robot head platform

Abstract: This paper describes a robot head platform which holds a pair of event-based Dynamic Vision Sensor (DVS) retinas and microphones connected to an event-based binaural AEREAR2 VLSI cochlea system. The platform has 6 degrees of freedom (DOF): 2 for the neck and 2 for each of the DVS retinas. Two applications using this platform are described: the first is image stitching of a scene larger than the field of view of the individual retinas as the head pans and tilts, and the second is selective image painting of the…
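To make the stitching idea concrete, here is a minimal sketch of how DVS events could be accumulated into a panoramic canvas by offsetting each event's pixel angle with the head's current pan/tilt. Everything in it is an illustrative assumption rather than the authors' implementation: the simple pinhole angular model, the 128×128 DVS resolution, the focal length, the canvas size, and the function names (event_to_pano, stitch) are all hypothetical.

```python
# Hypothetical sketch: painting DVS events into a panorama using the
# head's pan/tilt angles. Parameters and the pinhole model are
# assumptions for illustration, not the paper's method.
import numpy as np

SENSOR_W, SENSOR_H = 128, 128      # assumed DVS128 resolution
FOCAL_PX = 115.0                   # assumed focal length in pixels
PANO_W, PANO_H = 1024, 512         # panorama canvas size (pixels)
DEG_PER_PX = 360.0 / PANO_W        # horizontal angular resolution

def event_to_pano(x, y, pan_deg, tilt_deg):
    """Map a DVS pixel (x, y) to panorama coordinates for the
    current head pose, using a simple angular (pinhole) model."""
    # Angle of the event ray relative to the optical axis, plus head pose.
    az = np.degrees(np.arctan2(x - SENSOR_W / 2, FOCAL_PX)) + pan_deg
    el = np.degrees(np.arctan2(y - SENSOR_H / 2, FOCAL_PX)) + tilt_deg
    u = int((az % 360.0) / DEG_PER_PX)
    v = int(np.clip((el + 90.0) / 180.0 * PANO_H, 0, PANO_H - 1))
    return u, v

def stitch(events, pano=None):
    """Paint events (x, y, polarity, pan, tilt) into the canvas:
    ON events brighten a pixel, OFF events darken it."""
    if pano is None:
        pano = np.full((PANO_H, PANO_W), 128, dtype=np.uint8)
    for x, y, pol, pan, tilt in events:
        u, v = event_to_pano(x, y, pan, tilt)
        step = 16 if pol > 0 else -16
        pano[v, u] = np.clip(int(pano[v, u]) + step, 0, 255)
    return pano

# Usage: synthetic events from one pixel while the head pans 0-20 degrees.
events = [(64, 64, +1, p, 0.0) for p in np.linspace(0.0, 20.0, 50)]
canvas = stitch(events)
print(canvas.shape, canvas.min(), canvas.max())
```

In a truly event-driven system the canvas update would be applied per event as it arrives; the batch loop here is only for clarity.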

Cited by 4 publications (6 citation statements) | References 10 publications
“…Thus, the fully asynchronous event-based approach might be worth pursuing in the context of closed-loop collision-avoidance in cluttered environments. Closed-loop control systems, which rely on neuromorphic sensory systems, have been proposed, ranging from vision-based pencil balancing (Conradt, Cook et al., 2009) to auditory-based source following (Klein, Conradt, & Liu, 2015). However, post-processing of sensory information was done with conventional CPUs, and the sensor itself was stationary.…”
Section: Collision Avoidance in Outdoor Cluttered Environments
confidence: 99%
“…The localization results for multi-speaker scenarios show an average accuracy of 90% for estimations over a 10 s period. This method can be cheaply implemented on mobile platforms that can react to the location of a sound source [15,16]. The results in this work are currently limited to stationary discrete positions.…”
Section: Discussion
confidence: 99%
“…In Klein et al. (2015), two DVS cameras are mounted in a robot head to provide vision. The authors designed an image stitching algorithm to represent a scene larger than the field of view of each of the retinas.…”
Section: Vision and Attention
confidence: 99%
“…Vision (Klein et al., 2015), predator robot (Moeys et al., 2016a), robot goalies (Becanovic et al., 2002; Delbruck and Lichtsteiner, 2007; Delbruck and Lang, 2013), humanoid robot (Rea et al., 2013).
Algorithms: mapping (Pérez-Carrasco et al., 2013), filtering (Ieng et al., 2014; Bidegaray-Fesquet, 2015), lifetime estimation (Mueggler et al., 2015b), classification (Li et al., 2018), compression (Brandli et al., 2014; Doutsi et al., 2015; Bi et al., 2018), prediction (Gibson et al., 2014b; Kaiser et al., 2018), high-speed frame capturing (Liu et al., 2017b; Pan et al., 2018), spiking neural networks (Dhoble et al., 2012; Stromatias et al., 2017), data transmission (Corradi and Indiveri, 2015), matching (Moser, 2015), hybrid methods (Indiveri, 2011a,b, 2012; Weikersdorfer et al., 2014; Leow and Nikolic, 2015), fusion (Akolkar et al., 2015b; Rios-Navarro et al., 2015; Neil and Liu, 2016).
Feature extraction: vehicle detection (Bichler et al., 2011, 2012), gesture recognition (Ahn, 2012), robot vision (Lagorce et al., 2013), hardware implementation (del Campo et al., 2013; Yousefzadeh et al., 2015; Hoseini and Linares-Barranco, 2018), optical flow (Koeth et al., 2013; Clady et al., 2017; Zhu et al., 2017a), feature extraction algorithms (Lagorce et al., 2015a; …”
Section: Vision and Attention
confidence: 99%