2021
DOI: 10.1145/3477963

Multi-modal Open World User Identification

Abstract: User identification is an essential step in creating a personalised long-term interaction with robots. This requires learning the users continuously and incrementally, possibly starting from a state without any known user. In this article, we describe a multi-modal incremental Bayesian network with online learning, which is the first method that can be applied in such scenarios. Face recognition is used as the primary biometric, and it is combined with ancillary information, such as gender, age, height, and time of interaction…
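The abstract (and the citation statements below) describe fusing a primary face biometric with soft biometrics under an incremental Bayesian model that can start with no enrolled users. The published MMIBN is not reproduced here; the following is only a minimal Python sketch of that fusion idea, with an explicit "unknown user" hypothesis so identification works in an open-world setting. The class name, priors, likelihood shapes, and thresholds are all illustrative assumptions, not parameters of the published method.

```python
import math


class OpenWorldIdentifier:
    """Toy open-world identifier: face similarity fused with soft biometrics."""

    def __init__(self, unknown_prior=0.5):
        self.users = {}                     # user_id -> stored template and soft biometrics
        self.unknown_prior = unknown_prior  # assumed prior mass on "this is a new user"

    def enroll(self, user_id, face_template, gender, age, height):
        # Keep one template per user; the published system instead updates
        # a Bayesian network online as more observations arrive.
        self.users[user_id] = {"face": face_template, "gender": gender,
                               "age": age, "height": height}

    @staticmethod
    def _similarity(a, b):
        # Cosine similarity of face embeddings, mapped to [0, 1] (assumed face score).
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return 0.5 * (1.0 + dot / (na * nb + 1e-9))

    @staticmethod
    def _gauss(x, mu, sigma):
        # Unnormalised Gaussian kernel used as a relative likelihood.
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def identify(self, face_template, gender, age, height):
        """Return (user_id or None, posterior); None means 'unknown user'."""
        if not self.users:
            return None, 1.0  # open-world start: no enrolled users yet
        known_prior = (1.0 - self.unknown_prior) / len(self.users)
        scores = {}
        for uid, u in self.users.items():
            face_like = self._similarity(face_template, u["face"])  # primary biometric
            gender_like = 0.9 if gender == u["gender"] else 0.1     # soft biometrics
            age_like = self._gauss(age, u["age"], sigma=5.0)
            height_like = self._gauss(height, u["height"], sigma=5.0)
            scores[uid] = known_prior * face_like * gender_like * age_like * height_like
        # Flat likelihood for the "unknown user" hypothesis (assumed value).
        unknown_score = self.unknown_prior * 0.5 ** 4
        total = sum(scores.values()) + unknown_score
        best_uid = max(scores, key=scores.get)
        if unknown_score >= scores[best_uid]:
            return None, unknown_score / total
        return best_uid, scores[best_uid] / total
```

In this sketch a caller would enroll the person (for example, after asking their name) whenever identify returns None, which mirrors the requirement of starting from a state without any known user and learning users incrementally.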

Cited by 8 publications (9 citation statements)
References 86 publications (137 reference statements)
“…Thus, we applied Multi-modal Incremental Bayesian Network (MMIBN) with online learning (Irfan et al. 2018b, 2021), which is the first method for sequential and incremental learning of users that does not require any preliminary training for user recognition. It combines face recognition with soft biometrics, which are ancillary physical or behavioural characteristics, such as gender, age, height and time of interaction, that can be used to improve the recognition performance (Jain et al.…”
Section: Personalised Patient–robot Interface (mentioning)
confidence: 99%
“…2018) on a long-term (4 weeks) HRI study in the real world with 14 participants (93.2% identification rate) and on a large artificial multi-modal dataset with 200 users (65.7% identification rate) (Irfan et al. 2018b, 2021).…”
Section: Personalised Patient–robot Interface (mentioning)
confidence: 99%
“…In order to ensure a natural level of interaction with mutual understanding (Mavridis, 2015), non-verbal features, such as gaze (through face tracking) and body movements (i.e., animated speech feature of NAOqi), were used. The interaction was personalised by recognising users with Multi-modal Incremental Bayesian Network (Irfan et al., 2018; Irfan et al., 2021), which combines face recognition with soft biometrics (age, gender, height and time of interaction), and a knowledge-base was used to record and recall user preferences.…”
Section: Applying Barista Datasets To Human-robot Interaction (mentioning)
confidence: 99%
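The statement above also pairs recognition with a knowledge base for recording and recalling user preferences. The cited work does not spell out its structure here; the snippet below is only a hypothetical in-memory stand-in (the PreferenceStore name and the coffee-order example are invented to fit the barista setting) showing how an identified user ID would key preference recall.

```python
class PreferenceStore:
    """Toy knowledge base: record and recall per-user preferences by user ID."""

    def __init__(self):
        self._prefs = {}  # user_id -> {preference name: value}

    def record(self, user_id, key, value):
        self._prefs.setdefault(user_id, {})[key] = value

    def recall(self, user_id, key, default=None):
        return self._prefs.get(user_id, {}).get(key, default)


# Once the identifier returns a user ID, the robot can personalise the dialogue.
store = PreferenceStore()
store.record("user_1", "usual_order", "flat white")
print(store.recall("user_1", "usual_order"))  # -> flat white
```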