2022
DOI: 10.31234/osf.io/fcj8b
Preprint
Conversational eyebrow frowns facilitate question identification: An online VR study

Abstract: Conversation is a time-pressured environment. Recognising a social action (the ‘speech act’, such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions, since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conver…

Cited by 6 publications (4 citation statements)
References 52 publications
“…Likewise, facial signals are integrated with speech not only to inform emotion recognition (Pourtois et al., 2005) but also to complement speech and contribute to the semantic and pragmatic meaning of an utterance (Bavelas & Chovil, 2018; Bavelas et al., 2014; Chovil, 1991; Domaneschi et al., 2017; Frith, 2009; Nota et al., 2021, 2022). During natural conversation we produce hand gestures and facial signals alongside our speech that change how our utterance should be interpreted.…”
Section: Gestalt Perception Within and Across Sensory Modalities
confidence: 99%
“…However, when we are in an interaction, the task is not just to understand the semantic content of an utterance but to very rapidly know how to respond. Part of what guides this will be in the percept itself (i.e., the ambient array of acoustic and visual energy; e.g., Benitez-Quiroz et al., 2016; Cienki, 2005; Domaneschi et al., 2017; Hömke et al., 2022; Nota et al., 2022; Zhang et al., 2021), but an important part of it will be in multiple levels of the interactional context: potential adjacency pairs, prior discourse, common ground, cultural norms, and so on. For example, as soon as we are aware that an utterance is a question, the type of response that is afforded is narrowed down, even if we do not know what the question is about or what our specific answer should be.…”
Section: Gestalt Perception Within and Across Sensory Modalities
confidence: 99%
“…Theoretically, VR has proven particularly valuable for studying multimodal and multichannel communication (e.g., Chu & Hagoort, 2014; Hömke et al., 2018; Nota et al., 2022; Raghavan et al., 2023). Traditional studies often presented participants with only visual or auditory information, focusing on a single bodily channel, such as the voice or face.…”
Section: VR: Basic Concepts and Affordances for Language Research
confidence: 99%
“…Moreover, visual signals produced earlier in the question may exert greater influence on fast responding (for such effects of eyebrow movements, see Nota et al., 2022). As the gesture timing spontaneously varied across questions (i.e.…”
Section: Introduction
confidence: 99%