2012
DOI: 10.1007/978-3-642-31525-1_17

Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention

Abstract: Exploratory gaze movements are fundamental for gathering the most relevant information about the partner during social interactions. We have designed and implemented a system for dynamic attention allocation that is able to actively control gaze movements during a visual action recognition task. During the observation of a partner's reaching movement, the robot is able to contextually estimate the goal position of the partner's hand and the location in space of the candidate targets, while moving …
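As an illustration only, and not the implementation described in the paper, the following minimal Python sketch shows one way such a contextual estimate could be coupled with a simple attention-allocation rule: a Bayesian belief over hypothetical candidate targets is updated from the observed hand motion, and gaze switches from the hand to the most likely target once the belief becomes confident. The target positions, the angle-based likelihood, and the entropy threshold are all assumptions made for the example.

```python
import numpy as np

# Illustrative candidate target positions on a table plane, in metres
# (assumed values, not taken from the paper).
TARGETS = np.array([[0.30, -0.20], [0.35, 0.00], [0.30, 0.25]])


def likelihood(hand_pos, hand_vel, target, sigma_deg=15.0):
    """Likelihood that the hand is heading towards `target`, scored by the
    angle between the hand velocity and the hand-to-target direction."""
    to_target = target - hand_pos
    nv, nt = np.linalg.norm(hand_vel), np.linalg.norm(to_target)
    if nv < 1e-6 or nt < 1e-6:
        return 1.0  # uninformative observation
    cos_a = np.clip(np.dot(hand_vel, to_target) / (nv * nt), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))
    return np.exp(-0.5 * (angle / sigma_deg) ** 2)


def update_belief(belief, hand_pos, hand_vel):
    """One Bayesian update of the belief over candidate reach targets."""
    lik = np.array([likelihood(hand_pos, hand_vel, t) for t in TARGETS])
    posterior = belief * lik
    return posterior / posterior.sum()


def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))


def pick_gaze_target(belief, entropy_threshold=0.8):
    """Toy attention-allocation rule: while the belief is still uncertain,
    keep the gaze on the moving hand; once one target clearly dominates,
    fixate that target to refine its localisation."""
    if entropy(belief) > entropy_threshold:
        return "hand"
    return f"target_{int(np.argmax(belief))}"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    belief = np.ones(len(TARGETS)) / len(TARGETS)
    hand, goal = np.array([0.0, 0.0]), TARGETS[2]
    for step in range(10):
        vel = 0.1 * (goal - hand) + rng.normal(0.0, 0.005, 2)  # noisy reach step
        hand = hand + vel
        belief = update_belief(belief, hand, vel)
        print(step, np.round(belief, 3), pick_gaze_target(belief))
```

Running the script prints the belief converging on the target the simulated hand reaches for, with the gaze label switching from the hand to that target once the belief becomes confident.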

Cited by 2 publications (4 citation statements)
References 20 publications
“…A final, long-term goal of this work is to endow an artificial system, such as a humanoid robot, with more advanced social skills when engaged in interactions with human partners. In a previous work complementary to this one, and also aimed at achieving the skills described above, we implemented a system for dynamic attention allocation able to actively control gaze movements during a visual action recognition task [17]. Similarly to what was described for the reaching prediction in the cognitive science setup above, the system is able to predict the goal position of the partner's hand while it moves towards one of a number of visible targets.…”
Section: Employing the AOS/APS Model Framework in Human-Robot Interactions (mentioning; confidence: 99%)
“…Robotic implementation can represent a valuable testbed for the AOS/APS social interaction model, and at this stage we can advance some hypotheses about the effects we expect to observe when applying the model to real-world interactions. First, the system's good performance in action prediction (see [17]) should allow for fast and reliable detection of the switching point between the AOS-dominated resonance phase and the APS-controlled social response. Second, we expect to observe a further improvement in this performance, consistent with the additional confidence the system can achieve in certain classes of social interactions by practicing them.…”
Section: Employing the AOS/APS Model Framework in Human-Robot Interactions (mentioning; confidence: 99%)
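As a loose illustration of that hypothesis, and under our own assumptions rather than the AOS/APS model itself, such a switching point could be detected by requiring the action-prediction confidence to stay above a threshold for a few consecutive frames:

```python
from collections import deque


class SwitchPointDetector:
    """Toy sketch (an assumption, not the cited system): the transition from
    the observation-dominated (resonance) phase to the response phase is
    declared once the action-prediction confidence has stayed above a
    threshold for k consecutive frames."""

    def __init__(self, threshold=0.9, k=3):
        self.threshold = threshold
        self.window = deque(maxlen=k)

    def update(self, prediction_confidence):
        """Feed the current prediction confidence (e.g. the max posterior over
        candidate targets); returns True while the switch condition holds."""
        self.window.append(prediction_confidence)
        return (len(self.window) == self.window.maxlen and
                all(c >= self.threshold for c in self.window))
```

In this sketch, a faster and more reliable predictor simply pushes the trigger earlier and makes it more stable, which is the kind of improvement the quoted passage anticipates.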
“…a robotic salesman or a robotic bartender), with multiple users that move, act and interact independently in the scene, seeking attention and possibly service from the robotic agent. In such cases, torso [20] and/or face pose estimation [8,18,21] are identified as important attentive cues and are further utilized by fusion modules to initiate interaction with specific user(s). A specific drawback that many pose recovery approaches present [19,20,4] is the explicit (e.g.…”
Section: Introduction (mentioning; confidence: 99%)