Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia 2010
DOI: 10.1145/1899475.1899483
Recognizing conversational context in group interaction using privacy-sensitive mobile sensors

Abstract: The availability of mobile sociometric sensors offers Computer-Supported Cooperative Work (CSCW) designers the possibility of enhancing online meeting support through automatic recognition of conversational context. This paper addresses the task of discriminating one conversational context from another, specifically brainstorming from decision-making interactions, using easily computable nonverbal behavioral cues. We hypothesize that the difference in the dynamics between brainstorming and decision-making discus…
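The "easily computable nonverbal behavioral cues" mentioned in the abstract are, in this line of work, typically turn-taking statistics derived from per-participant speaking-status streams. Below is a minimal sketch of what such cues might look like, assuming binary speaking/silence frames per participant; the function name and the specific cue choices are illustrative, not the paper's exact feature set.

```python
import numpy as np

def turn_taking_features(speaking, frame_rate=1.0):
    """Hypothetical group-level turn-taking cues from binary speaking-status
    streams, shaped (participants, frames); frame_rate is frames per second."""
    speaking = np.asarray(speaking, dtype=bool)
    n_people, n_frames = speaking.shape

    # Fraction of time each participant holds the floor.
    floor_share = speaking.mean(axis=1)

    # Turn starts: rising edges (silence -> speech), totalled over the group.
    rising = np.diff(speaking.astype(int), axis=1) == 1
    turns_per_min = rising.sum() / (n_frames / frame_rate / 60.0)

    # Overlap: fraction of frames where two or more people speak at once.
    overlap_frac = (speaking.sum(axis=0) >= 2).mean()

    # Evenness of participation: entropy of the floor-share distribution.
    p = floor_share / max(floor_share.sum(), 1e-9)
    floor_share_entropy = -np.sum(p * np.log(p + 1e-9))

    return {
        "turns_per_min": turns_per_min,
        "overlap_frac": overlap_frac,
        "floor_share_entropy": floor_share_entropy,
    }

# Example: 4 participants, 600 frames at 10 fps of simulated speaking status.
rng = np.random.default_rng(0)
print(turn_taking_features(rng.random((4, 600)) > 0.7, frame_rate=10.0))
```

Intuitively, a lively brainstorming slice might show a higher turn rate, more overlap, and a more even floor share than a decision-making slice; this kind of difference in group dynamics is what the paper's hypothesis rests on.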

Cited by 11 publications (6 citation statements)
References 9 publications
“…In the first one, methods extract a number of features from audio (related to prosody and turn-taking) [47], [26], [44], video (related to head and body activity or gaze) [37], [24], and wearable sensors (related to body motion or physical proximity) [35], [38]. In the second step, these features are used as input to supervised or unsupervised learning methods to infer traits like dominance [48], [28], extroversion and locus of control [44], [37]; relations like roles [58], [16], [17] or status [47], [26]; group attitudes like cooperation and competition [30], tasks like brainstorming [29]; and concepts like collective intelligence [57]. Other works use the extracted features to create interactive systems that, through various visualizations of behavioral cues, affect the interaction itself [13], [35], [52], [6].…”
Section: B. Social Computing (mentioning)
confidence: 99%
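The quoted survey describes a two-step pipeline: extract behavioural features from audio, video, or wearables, then feed them to supervised or unsupervised learners. Below is a minimal sketch of the supervised second step, assuming a small labelled matrix of the turn-taking cues sketched earlier; the data values and the linear-SVM choice are illustrative, not the setup reported in any of the cited papers.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative feature matrix: one row per meeting slice,
# columns = [turns_per_min, overlap_frac, floor_share_entropy].
X = np.array([
    [12.0, 0.18, 1.35],   # brainstorming-like slices
    [11.2, 0.21, 1.30],
    [ 4.5, 0.05, 0.80],   # decision-making-like slices
    [ 5.1, 0.07, 0.85],
    [10.4, 0.16, 1.28],
    [ 4.9, 0.06, 0.92],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = brainstorming, 0 = decision-making

# Standardise the features, fit a linear SVM, and estimate accuracy
# with stratified 3-fold cross-validation on this toy data.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=3)
print("cv accuracy:", scores.mean())
```

The same feature matrix could equally feed an unsupervised method (e.g. clustering) to discover interaction types without labels, which is the other branch of the pipeline the survey mentions.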
“…The findings of this investigation complement those of earlier studies. These results advance prior research (Jayagopi et al., 2010) by increasing the number of modes predicted and demonstrating that they can be predicted outside of a laboratory setting.…”
Section: Discussion (supporting)
confidence: 82%
“…Scholars in these domains have investigated human proxemic and paralinguistic behavior since the 1920s. The utilization of paralinguistic and proxemic modalities has been shown to vary across tasks, and some studies have found a structural difference between communication patterns in teams during different CCPS tasks (Jayagopi et al., 2010; …).…”
Section: Introduction (mentioning)
confidence: 99%
“…Picard [149] advocated the use of sentiment and emotion detection via wearable technology to improve the understanding of a user's information needs. Jayagopi et al. [89], instead, used mobile sociometric sensors to recognise the user's conversational context in order to enhance online meeting support. Non-verbal cues were used to characterise the entire group and to discriminate the context of the conversation by aggregating the participants' nonverbal behaviour over time.…”
Section: Spoken Input (mentioning)
confidence: 99%
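The temporal aggregation this statement describes, per-participant behaviour pooled into group-level descriptors over time, can be sketched as a sliding-window computation. The sketch below assumes per-participant cue streams sampled at a fixed frame rate; all aggregation choices here are hypothetical rather than the paper's actual procedure.

```python
import numpy as np

def group_features_over_windows(cues, win, hop):
    """Aggregate per-participant cue streams, shaped (participants, frames),
    into one group-level feature vector per sliding window."""
    cues = np.asarray(cues, dtype=float)
    n_people, n_frames = cues.shape
    feats = []
    for start in range(0, n_frames - win + 1, hop):
        w = cues[:, start:start + win]
        per_person = w.mean(axis=1)          # each person's mean cue level
        feats.append([
            per_person.mean(),               # group average
            per_person.std(),                # spread across participants
            w.std(axis=1).mean(),            # average within-person variability
        ])
    return np.array(feats)

# Example: 4 participants, 300 frames of a hypothetical activity cue;
# 60-frame windows with a 30-frame hop yield 9 group-level vectors.
rng = np.random.default_rng(0)
stream = rng.random((4, 300))
print(group_features_over_windows(stream, win=60, hop=30).shape)  # (9, 3)
```

Each window's vector can then be classified independently, or the sequence of vectors can feed a temporal model, which is one way to see the conversation "on a temporal perspective" as the quoted text puts it.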