2019
DOI: 10.1111/cogs.12718

Complex Communication Dynamics: Exploring the Structure of an Academic Talk

Abstract: Communication is a multimodal phenomenon. The cognitive mechanisms supporting it are still understudied. We explored a natural dataset of academic lectures to determine how communication modalities are used and coordinated during the presentation of complex information. Using automated and semi-automated techniques, we extracted and analyzed, from the videos of 30 speakers, measures capturing the dynamics of their body movement, their slide change rate, and various aspects of their speech (speech rate, articul…

Cited by 9 publications (6 citation statements)
References 75 publications
“…Overall, pixel differentiation has been shown to provide a reliable measure of movement (Paxton & Dale, 2013; Romero et al., 2017) and can be used to capture movement in specific areas of the visual scene (Alviar et al., 2019; Danner et al., 2018). Note, however, that this method is particularly vulnerable to changes in background, such as movement or changes in lighting, and may not be able to capture smaller movements or movements toward the camera.…”
Section: Video-based Tracking
confidence: 99%
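The pixel-differentiation method discussed in the statement above amounts to frame differencing: summarize movement as the absolute grayscale change between consecutive video frames, optionally restricted to a region of interest. The sketch below is illustrative only (synthetic frames, a hypothetical `pixel_differentiation` helper), not the pipeline used by any of the cited studies.

```python
import numpy as np

def pixel_differentiation(frames, roi=None):
    """Movement estimate per consecutive frame pair: mean absolute
    grayscale difference, optionally restricted to a region of
    interest given as (row_slice, col_slice)."""
    movement = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        a, b = prev.astype(float), curr.astype(float)
        if roi is not None:
            rows, cols = roi
            a, b = a[rows, cols], b[rows, cols]
        movement.append(np.abs(b - a).mean())
    return np.array(movement)

# Synthetic demo: static background with a small bright patch
# (e.g., a hand) that appears and then shifts to the right.
frames = [np.zeros((48, 64)) for _ in range(3)]
frames[1][10:14, 20:24] = 255   # patch appears (16 pixels)
frames[2][10:14, 30:34] = 255   # patch moves to a new location

m = pixel_differentiation(frames)
```

The second frame pair yields a larger value than the first because both the patch's old and new locations change, which matches the intuition that displacement produces more pixel change than mere appearance. As the statement notes, background changes (lighting, camera shake) would inflate this measure indiscriminately.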
“…The study of the temporal dynamics of gesture-speech coordination has relatively lagged behind in its use of kinematic measurement methods, especially as compared to the degree to which state-of-the-art (psycho)linguistic methods are employed for the study of speech (e.g., Loehr, 2012; Shattuck-Hufnagel & Ren, 2018). This manifests itself in the relative scarcity (as compared to other research on instrumental action) of published studies that have applied motion tracking in gesture-speech research (Alexanderson, House, & Beskow, 2013; Alviar, Dale, & Galati, 2019; Chu & Hagoort, 2014; Danner, Barbosa, & Goldstein, 2018; Ishi, Ishiguro, & Hagita, 2014; Leonard & Cummins, 2010; Krivokapić, Tiede, & Tyrone, 2017; Krivokapić, Tiede, Tyrone, & Goldenberg, 2016; Parrell, Goldstein, Lee, & Byrd, 2014; Pouw & Dixon, 2019a; Quek et al., 2002; Rochet-Capellan et al., 2008; Rusiewicz et al., 2014; Treffner & Peter, 2002; Zelic, Kim, & Davis, 2015). It can be argued that the absence of motion tracking in the standard methodological toolkit of the multimodal language researcher has further led to imprecisions and conceptual confusions.…”
confidence: 99%
“…In the current paper, we aim to innovate the quantitative study of gesture ensembles. We hope thereby to fortify ongoing innovations in the quantitative study of gesture kinematics together with speech dynamics (Alviar, Dale, & Galati, 2019; Danner, Barbosa, & Goldstein, 2018; Pouw, Trujillo, & Dixon, in press) by enriching the investigation into how gestural low-level events feed into higher-level linguistic structures (Krivokapić, 2014; Ravignani et al., 2018; Shattuck-Hufnagel & Ren, 2018).…”
Section: Introduction
confidence: 99%
“…Since the hand annotation of gestures is very time-consuming (Ienaga et al., 2022), efforts are currently underway to develop easy-to-apply tools for automatic gesture readings. Studies such as those of Alviar et al. (2019), Pouw and Dixon (2019), or Krivokapić et al. (2017) have focused on the measurement of movement. Ienaga et al. (2022) have achieved partial automation in gesture analysis, but hand annotation is still necessary. In summary, the automation of gesture research is growing strongly, although manual annotation remains necessary.…”
unclassified