2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
DOI: 10.1109/aciiw.2019.8925179
Detecting F-formations & Roles in Crowded Social Scenes with Wearables: Combining Proxemics & Dynamics using LSTMs

Abstract: In this paper, we investigate the use of proxemics and dynamics for automatically identifying conversing groups, or so-called F-formations. More formally, we aim to automatically identify whether wearable sensor data coming from two people is indicative of F-formation membership. We also explore the problem of jointly detecting membership and more descriptive information about the pair relating to the role they take in the conversation (i.e. speaker or listener). We jointly model the concepts of proxemics and dyn…
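The abstract describes jointly modeling proxemics (e.g. radio-based proximity) and dynamics (body acceleration) with LSTMs to classify pairwise F-formation membership. As a rough illustration only (the class name, feature layout, and dimensions below are assumptions, not the paper's actual architecture), a single-cell LSTM over a pairwise sensor sequence might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PairLSTM:
    """Hypothetical sketch: one LSTM cell over a pair's joint
    feature sequence, ending in a membership probability."""

    def __init__(self, n_features, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell and
        # output gates, acting on the concatenation [x_t, h_{t-1}].
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_features + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.normal(0, 0.1, n_hidden)  # membership head
        self.n_hidden = n_hidden

    def forward(self, x_seq):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x_t in x_seq:  # iterate over time steps
            z = self.W @ np.concatenate([x_t, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        # Probability that this pair shares an F-formation.
        return sigmoid(self.w_out @ h)

# 7 assumed features per step: 1 proximity reading + 2 people x 3
# acceleration axes; 20 steps would be 1 s of data at 20 Hz.
model = PairLSTM(n_features=7, n_hidden=16)
seq = np.zeros((20, 7))
p = model.forward(seq)
assert 0.0 < p < 1.0
```

In practice the paper's model would be trained end to end; this untrained sketch only shows the data flow from a pairwise sensor window to a membership score.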

Cited by 8 publications (9 citation statements) | References 30 publications
“…This includes advances in integrating active learning approaches in order to better personalize the prediction models for specific users based on their feedback. Lastly, this proposed architecture should be integrated with work on interpreting the meaning of social signals such as body language in social interactions [Rosatelli et al., 2019]. This would allow the agent to take into account the dynamics of a social situation as it unfolds, allowing it to integrate social situation understanding based on social relations with observed social signals.…”
Section: Discussion
confidence: 99%
“…Kola et al [2019] propose an approach to model arbitrary social situations through a two-level ontology distinguishing situation cues and social relationship features (social background model). Rosatelli et al [2019] propose an approach where data from wearable sensors is processed with deep learning techniques to assess information such as roles in social interactions.…”
Section: Level 1: Social Situation Perception
confidence: 99%
“…ConfLab enables more robust models to be developed to conceptualize and detect social involvement. The Chalcedony badges used in the MatchNMingle dataset show promising results using their radio-based proximity sensor and acceleration data [27]. However, they still fall short of performing sufficiently for more downstream tasks due to the relatively low sample frequency (20 Hz) and annotation frequency (1 Hz) [16].…”
Section: Related Work
confidence: 99%
“…In such a context, the most important cues are interpersonal distances and relative body orientations [72]. Furthermore, many works address the detection of F-formations [41,64,66,91,93,122,158,160,172,211], which are spatial arrangements that people spontaneously form during free-standing social interactions. In an F-formation, people communicating with one another tend to maintain close proximity while sharing a focus of attention.…”
Section: Dyads and Groups
confidence: 99%
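The proxemic cues named in the statement above, interpersonal distance and relative body orientation, can be sketched numerically. The function name and angle convention below are illustrative assumptions, not taken from any of the cited works:

```python
import numpy as np

def pairwise_fformation_cues(p1, theta1, p2, theta2):
    """Return (distance, facing-deviation-1, facing-deviation-2) for
    two people at 2-D positions p1, p2 with body-facing angles
    theta1, theta2 in radians."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.linalg.norm(p2 - p1)                       # interpersonal distance
    bearing12 = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])  # 1 toward 2
    bearing21 = np.arctan2(p1[1] - p2[1], p1[0] - p2[0])  # 2 toward 1

    def ang_diff(a, b):
        # Smallest absolute angle between two directions.
        return np.abs(np.arctan2(np.sin(a - b), np.cos(a - b)))

    # How far each person's facing deviates from looking at the other;
    # small deviations at close range suggest a shared o-space.
    return d, ang_diff(theta1, bearing12), ang_diff(theta2, bearing21)

# Two people 1 m apart, facing each other (a vis-a-vis arrangement):
d, dev1, dev2 = pairwise_fformation_cues((0, 0), 0.0, (1, 0), np.pi)
# d -> 1.0, dev1 -> 0.0, dev2 -> 0.0
```

Thresholding such cues is only the crudest baseline; the surveyed works model them jointly with temporal dynamics rather than per-frame geometry alone.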