2015
DOI: 10.1007/s12193-015-0190-7
Learning multimodal behavioral models for face-to-face social interaction

Abstract: The aim of this paper is to model multimodal perception-action loops of human behavior in face-to-face interactions. The long-term goal of this research is to give artificial agents the social skills to engage in believable interactions with human interlocutors. To this end, we propose trainable behavioral models that generate optimal actions given others' perceived actions and joint goals. We first compare sequential models - in particular Discrete Hidden Markov Models (DHMMs) - with standard c…


Cited by 17 publications (18 citation statements)
References 57 publications
“…The LSTM and Bi-LSTM can automatically learn contextual variables from the interaction scenario. In order to compare the efficiency of the methods, actions (FX and GT) generated offline by BiLSTM are first compared with HMM [5] and DBN [6]. In addition, online predictions of the actions by LSTM are also compared with short-term Viterbi decoding of HMM and online filter prediction of DBN.…”
Section: Results
confidence: 99%
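The excerpt above contrasts offline decoding with online prediction, including "short-term Viterbi decoding of HMM." As a minimal sketch of the decoding step being compared, here is Viterbi decoding for a toy discrete HMM; all parameters and symbol sets below are hypothetical, not taken from the cited paper.

```python
import numpy as np

# Hypothetical toy DHMM: two hidden states, three discrete observation
# symbols. The probabilities are illustrative only.
start = np.array([0.6, 0.4])                 # P(initial state)
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])               # P(state_t | state_{t-1})
emit  = np.array([[0.5, 0.4, 0.1],
                  [0.1, 0.3, 0.6]])          # P(observation | state)

def viterbi(obs):
    """Most likely hidden-state sequence for a discrete observation sequence."""
    delta = np.log(start) + np.log(emit[:, obs[0]])   # log-prob of best path so far
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(trans)       # scores[from, to]
        back.append(scores.argmax(axis=0))            # best predecessor per state
        delta = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(delta.argmax())]
    for bp in reversed(back):                         # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2, 2]))   # -> [0, 0, 1, 1]
```

An online ("short-term") variant would rerun this over a sliding window of recent observations rather than the full sequence, trading optimality for latency.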
“…A multimodal interactive model based on HMM was proposed in [5]. In this model, each interactive unit (IU) is modeled by one Discrete Hidden Markov Model (DHMM) that models joint multimodal sensorimotor behaviors via its hidden states.…”
Section: Hidden Markov Models
confidence: 99%
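The citation above describes each interactive unit (IU) as one DHMM whose hidden states jointly model sensorimotor behavior. A minimal sketch of that idea, with entirely hypothetical toy parameters (the paper's actual state spaces and training procedure are not reproduced here): filter the hidden state from perceived symbols and emit the most probable motor action at each step, in the spirit of the online filter prediction mentioned earlier.

```python
import numpy as np

# Hypothetical toy IU model: hidden states jointly emit a perceived symbol
# (the interlocutor's action) and a motor symbol (the agent's action).
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
emit_percept = np.array([[0.7, 0.3],   # P(perceived symbol | state)
                         [0.2, 0.8]])
emit_motor   = np.array([[0.9, 0.1],   # P(motor symbol | state)
                         [0.2, 0.8]])

def act(perceived):
    """Online filtering: update the belief over hidden states from each
    perceived symbol, then output the most probable motor action."""
    belief = start.copy()
    actions = []
    for p in perceived:
        belief = belief * emit_percept[:, p]          # condition on perception
        belief /= belief.sum()
        actions.append(int((belief @ emit_motor).argmax()))
        belief = belief @ trans                       # predict next state
    return actions

print(act([0, 0, 1, 1]))   # -> [0, 0, 0, 1]
```

The key point of the hidden-state coupling is visible here: the motor choice depends on the filtered state belief, not directly on the last perceived symbol, so the same percept can yield different actions in different contexts.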
“…Notice that one single CS could also sequence many sensory-motor states together. Similar concepts have been proposed in the literature of multimodal behavior modeling [23,32,35]. In our application, we used data mining techniques to explore the cognitive states of the presenter.…”
Section: Cognitive State
confidence: 99%