2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops
DOI: 10.1109/cvprw.2010.5543751
A Graphical Model for unifying tracking and classification within a multimodal Human-Robot Interaction scenario

Abstract: This paper introduces our research platform for enabling a multimodal Human-Robot Interaction scenario as well as our research vision: approaching problems in a holistic way to realize this scenario. In this paper, however, the main focus is on the image processing domain, where our vision has been realized by combining particle tracking and Dynamic Bayesian Network classification in a unified Graphical Model. This combination allows for enhancing the tracking process by an adaptive motion model realized v…
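The abstract only sketches how the coupling works. As a rough illustration, the snippet below shows one generic way a class posterior, such as a DBN classifier might supply, can adapt a particle filter's motion model. This is a minimal sketch under assumed names, classes, and noise parameters, not the authors' actual Graphical Model; everything here beyond the general particle-filter/classifier coupling is an illustrative assumption.

import numpy as np

# Minimal sketch (not the paper's implementation): a particle filter whose
# motion-model noise is adapted by a class posterior, e.g. from a DBN
# classifier deciding between "idle" and "gesturing" hand motion.
# All names, classes, and parameters below are illustrative assumptions.

rng = np.random.default_rng(0)

N = 200                                          # number of particles
particles = rng.normal(0.0, 1.0, size=(N, 2))    # 2-D positions
weights = np.full(N, 1.0 / N)

# Per-class motion noise: a "gesturing" hand moves more than an "idle" one.
MOTION_STD = {"idle": 0.05, "gesturing": 0.5}

def predict(particles, class_posterior):
    """Propagate particles with noise mixed according to the class posterior."""
    std = sum(p * MOTION_STD[c] for c, p in class_posterior.items())
    return particles + rng.normal(0.0, std, size=particles.shape)

def update(particles, weights, observation, obs_std=0.2):
    """Reweight particles by a Gaussian observation likelihood."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / obs_std**2)
    return w / w.sum()

def resample(particles, weights):
    """Systematic resampling when the effective sample size drops."""
    if 1.0 / np.sum(weights**2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# One tracking step: the (hypothetical) DBN supplies the class posterior.
class_posterior = {"idle": 0.2, "gesturing": 0.8}
particles = predict(particles, class_posterior)
weights = update(particles, weights, observation=np.array([0.3, -0.1]))
particles, weights = resample(particles, weights)
print("state estimate:", np.average(particles, axis=0, weights=weights))

The design point the sketch tries to convey is the feedback direction: the classifier's belief about the current activity class sets the spread of the prediction step, so tracking becomes tighter when the target is believed to be still and more exploratory when it is believed to be moving.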

Cited by 1 publication (1 citation statement; citing publication year: 2012).
References: 20 publications.
“…For the most reliable communication between robot and human, multimodal interfaces are used nowadays. In such interfaces, interaction draws on multiple senses, most often hearing and sight [2,3]. In general, unimodal interaction, mainly based on the recognition of acoustic signals, has lower requirements for technical equipment and computational complexity.…”
Section: Introduction
confidence: 99%