Proceedings of the 14th ACM International Conference on Multimodal Interaction 2012
DOI: 10.1145/2388676.2388772
Linking speaking and looking behavior patterns with group composition, perception, and performance

Abstract: This paper addresses the task of mining typical behavioral patterns from small group face-to-face interactions and linking them to social-psychological group variables. Towards this goal, we define group speaking and looking cues by aggregating automatically extracted cues at the individual and dyadic levels. Then, we define a bag of nonverbal patterns (Bag-of-NVPs) to discretize the group cues. The topics learnt using the Latent Dirichlet Allocation (LDA) topic model are then interpreted by studying the corre…
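To make the pipeline described in the abstract concrete, here is a minimal sketch of the Bag-of-NVPs discretization step, assuming per-time-slice group cues such as speaking time and gaze behavior. All cue names, bin counts, and values below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): discretizing group-level speaking
# and looking cues into a "bag of nonverbal patterns" (Bag-of-NVPs).
# Cue names and bin edges are illustrative assumptions.
import numpy as np

def bag_of_nvps(group_cues, n_bins=3):
    """Quantize each continuous group cue into discrete levels and count
    the resulting (cue, level) tokens, one bag per time slice.

    group_cues: dict mapping cue name -> sequence of per-slice values,
                e.g. {"speaking_time": [...], "gaze_to_speaker": [...]}.
    Returns a list of token-count dicts (one bag per time slice).
    """
    names = sorted(group_cues)
    n_slices = len(next(iter(group_cues.values())))
    bags = [dict() for _ in range(n_slices)]
    for name in names:
        values = np.asarray(group_cues[name], dtype=float)
        # Equal-frequency bin edges estimated from the data itself.
        edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
        levels = np.digitize(values, edges)  # 0 = low, ..., n_bins-1 = high
        for t, lvl in enumerate(levels):
            token = f"{name}={lvl}"
            bags[t][token] = bags[t].get(token, 0) + 1
    return bags

# Toy usage with two hypothetical group cues over four time slices.
cues = {"speaking_time": [0.2, 0.9, 0.5, 0.1],
        "gaze_to_speaker": [0.7, 0.3, 0.8, 0.4]}
print(bag_of_nvps(cues))
```

Each resulting bag can then be treated as a document of discrete nonverbal "words" for the LDA topic-modeling step.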

Cited by 40 publications (56 citation statements)
References 17 publications
“…Head gesture and gaze are approximated using head posture and motion. [10] points to visual focus of attention as a key feature for linking impressions from group members. [17] also points to gaze states as an effective feature for detecting important statements.…”
Section: Personality Trait Inference in Group (mentioning)
confidence: 99%
“…On the other hand, unsupervised learning approaches can often find intermediate representations that help understand these higher concepts [10].…”
Section: Unsupervised Interaction Analysis (mentioning)
confidence: 99%
“…That research focuses on a listener's head gesture recognition in a dyadic interaction. [21], [22] use a latent Dirichlet allocation (LDA) model for mining context features in groups. In [22], group features for speaking status and gaze state are used as input to an LDA.…”
Section: Interaction Feature Extraction in Conversation (mentioning)
confidence: 99%
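The LDA step that [22] applies to these group speaking and gaze features can be illustrated with a short, hedged sketch using scikit-learn's standard topic model; the count matrix below is a toy stand-in, not data from the cited work.

```python
# Hedged sketch of the LDA step: group speaking-status and gaze-state
# counts (a slices-by-patterns matrix) fed to a standard LDA topic model.
# The matrix is a toy stand-in, not data from [22].
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows: group interaction slices; columns: discretized nonverbal patterns.
X = np.array([[5, 0, 2, 1],
              [0, 6, 1, 3],
              [4, 1, 0, 2]])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)     # per-slice topic proportions
print(theta.round(2))            # topics read as typical behavior patterns
print(lda.components_.round(2))  # per-topic pattern weights
```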
“…Conversely, we used a varied feature set for modeling performance: not only speech and gaze cues, but also hand and head gesture features. The main objective of [21], [22] is to find context features that link to participant roles or personality traits. Our objective is to build a model that predicts storytelling performance using context features, which is quite different from those objectives.…”
Section: Interaction Feature Extraction in Conversation (mentioning)
confidence: 99%