2016
DOI: 10.1016/j.cviu.2016.03.003

A multi-modal perception based assistive robotic system for the elderly

Abstract: In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interaction with a user when it detects the user's intention to do so. All the robot's actions are based on multi-modal perception, which includes user detection based on RGB-D data, detection of the user's intention for interaction from RGB-D and audio data, and communication via user-distance-mediated speech recognition. The …

Cited by 23 publications (17 citation statements)
References 47 publications
“…Thus, feature fusion was conducted instead of classifier fusion, which would fuse the outputs of individual classifiers. Mollaret et al. [8] also dealt with the recognition of an intention for interaction with an assistive robot. Using head and shoulder orientation and voice activity, the corresponding intention could be inferred with a Hidden Markov Model.…”
Section: Related Work
confidence: 99%
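The HMM-based intention inference described in the excerpt above can be sketched as follows. This is a minimal illustrative example, not the paper's model: the two hidden states, the discrete observation symbols (head/shoulder orientation toward the robot combined with voice activity), and all probability values are assumptions chosen for the sketch.

```python
import numpy as np

# Hypothetical two-state HMM: does the user intend to interact or not?
STATES = ["no_intention", "intention"]
# Observation symbols: (facing_robot, voice_active) pairs mapped to indices.
OBS = {(False, False): 0, (False, True): 1, (True, False): 2, (True, True): 3}

pi = np.array([0.9, 0.1])               # initial state distribution (assumed)
A = np.array([[0.8, 0.2],               # state transition probabilities (assumed)
              [0.3, 0.7]])
B = np.array([[0.60, 0.20, 0.15, 0.05], # emissions given "no_intention" (assumed)
              [0.05, 0.15, 0.30, 0.50]])# emissions given "intention" (assumed)

def filter_intention(observations):
    """Forward-algorithm filtering: P(state_t | obs_1..t) at each time step."""
    belief = pi * B[:, OBS[observations[0]]]
    belief /= belief.sum()
    beliefs = [belief]
    for obs in observations[1:]:
        # Predict with the transition model, then correct with the emission model.
        belief = (A.T @ belief) * B[:, OBS[obs]]
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

# A user turns toward the robot and then starts speaking:
seq = [(False, False), (True, False), (True, True), (True, True)]
posterior = filter_intention(seq)
print(posterior[-1])  # belief over [no_intention, intention] after 4 steps
```

As the observations accumulate (facing the robot, then speaking), the filtered belief shifts from "no_intention" toward "intention", which is the kind of evidence accumulation that makes an HMM a natural fit for deciding when the robot should initiate interaction.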
“…The second category of techniques for explicitly estimating availability leverages a person's demeanor, focusing on immediate social cues of availability. Social cues, such as eye contact, are largely task-independent, and as a result, models based on social cues are more easily generalizable across a wider set of applications: in robotics, the methods have been used to estimate related measures of a person's "intent-to-engage" and awareness of the robot in applications ranging from companion robots [13,42], shopping mall assistants [9,31,57,59], receptionists [7], and bartenders [21]. Some prior work has relied on external sensors such as motion capture systems, ground-mounted LIDAR, and ceiling cameras [9,31,57,59], which can be expensive and difficult to deploy in support of mobile robots traversing a large space.…”
Section: Estimating Availability and Interruption Context
confidence: 99%
“…Some prior work has relied on external sensors such as motion capture systems, ground-mounted LIDAR, and ceiling cameras [9,31,57,59], which can be expensive and difficult to deploy in support of mobile robots traversing a large space. Other work has used onboard sensors to detect social cues of engagement [7,13,21,42]. Although engagement estimation is a separate problem from interruptibility estimation (because interruptibility can be high even when engagement is low), the problems are closely related, and we take inspiration from the work of Mollaret et al. [42] and Chiang et al. [13] both in our selection of audio-visual features for classification and in validating the use of Hidden Markov Models to estimate interruptibility.…”
Section: Estimating Availability and Interruption Context
confidence: 99%