2013
DOI: 10.3389/fnins.2013.00234
Event-driven visual attention for the humanoid robot iCub

Abstract: Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against an instance of the state of th…

Cited by 35 publications (30 citation statements)
References 38 publications
“…On one side, this debate involves positions that, also based on psychological evidence (e.g., in relation to pop-out effects, [107]), propose models that assign a prominent role to bottom-up features guiding visual exploration on the basis of stimulus-based saliency maps [53], [58], [88]. On the other side, the debate sees positions stressing the pivotal role of top-down task-dependent processes to explain human visual attention [41], [65], [79], [90], [96], [102].…”
Section: The Bottom-up Component
confidence: 99%
“…The latency of event-based visual attention was two orders of magnitude lower than that of a frame-based approach (Rea et al, 2013). Recognition of playing-card suits was achieved as a deck was flicked through (30 ms exposure) (Serrano-Gotarredona and Linares-Barranco, 2015).…”
Section: Event-driven Vision For Robots
confidence: 99%
“…The motion segmentation problem has also been formulated as one of determining salient regions in spatiotemporal data. Rea et al (2013) implemented a selective saliency model on the iCub platform (Metta et al, 2008), using multiple bottom-up feature maps responsible for contrast, orientation and motion. Serrano-Gotarredona et al (2009) illustrated a parallel very large scale integrated (VLSI) system using the address-event representation (AER) framework, called convolution AER vision architecture for real-time systems (CAVIAR), for object recognition and tracking.…”
Section: Introduction
confidence: 99%
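The citation statement above describes a selective saliency model built from multiple bottom-up feature maps (contrast, orientation, motion). A minimal sketch of that general technique, not the authors' actual implementation, normalizes each feature map, combines them with a weighted sum, and picks the focus of attention by winner-take-all; the function names and weights here are illustrative assumptions:

```python
import numpy as np

def normalize_map(m):
    """Rescale a feature map to [0, 1]; a constant map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency(feature_maps, weights=None):
    """Weighted sum of normalized bottom-up feature maps (illustrative combination)."""
    if weights is None:
        weights = [1.0] * len(feature_maps)
    s = sum(w * normalize_map(m) for w, m in zip(weights, feature_maps))
    return s / sum(weights)

def focus_of_attention(sal):
    """Winner-take-all: return the (row, col) of the most salient location."""
    return np.unravel_index(np.argmax(sal), sal.shape)
```

In an event-driven setting, the motion map would be updated per event rather than per frame, which is the source of the latency advantage the citation statements report.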