2020
DOI: 10.1371/journal.pone.0233892
Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions

Abstract: The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in Americ…

Cited by 20 publications (14 citation statements). References 41 publications.
“…Similarly, speakers in the DURATION IS QUANTITY search packet performed 81 temporal gestures (60% of the occasions in which the hands were visible); only 8 clips (9.17%) contained a non-temporal gesture, and 36 cases (28.82%) did not present a gesture realisation. These results are consistent with previous research on the frequency of speech-gesture co-occurrence (Pagán Cánovas et al., 2020; Woodin et al., 2020). No statistically significant differences were observed between our two search packages, nor across the individual expressions of each category regarding gesture frequency (Figure 1)…”
Section: Results (supporting)
confidence: 93%
“…The remaining axis, the lateral, does not figure explicitly in the linguistic repertoire of any oral language; however, many psycholinguistic experiments (e.g., Santiago et al., 2007) have proved that it does play an important role in structuring time. Additionally, a great number of studies have found that, in spontaneous speech, speakers conceptualise time using the lateral axis, as evidenced by their gesturing patterns (Casasanto & Jasmin, 2012; Pagán Cánovas et al., 2020)…”
Section: Introduction (mentioning)
confidence: 99%
“…The quantitative focus of our study stands in contrast to observational gesture research, which has tended to focus on detailed qualitative analysis of particular examples [15,28]. With this paper, we contribute to recent advances in large-scale, quantitative gesture research, particularly a recent study in this journal [34], which used the UCLA Red Hen Lab corpus to study hundreds of gestures related to time expressions [35]. In comparison to the Red Hen corpus, an advantage of the TV News Archive is that all its videos are immediately accessible to researchers, making our coding decisions fully transparent and reproducible [36].…”
Section: Introduction (mentioning)
confidence: 99%
“…For instance, research on co-speech gestures has reported the recurrent use of the lateral axis when expressing temporal concepts, locating the past on the left and the future on the right (Cienki, 1998; Casasanto and Jasmin, 2012; Walker et al., 2014; Alcaraz Carrión, 2019), at least in cultures with left-to-right reading direction; in languages such as Hebrew or Arabic, this pattern is reversed (Tversky et al., 1991). This preference for the lateral axis in the gestural modality often causes speech-gesture incongruences, since English speakers often gesture to their left while saying back in those days (Alcaraz Carrión, 2019; Pagán-Cánovas et al., 2020). This preference for the lateral axis has also been reported in several psycholinguistic experiments in a variety of languages (Santiago et al., 2007, 2008; Weger and Pratt, 2008) as well as in cultural artifacts such as timelines (Davis, 2012; Coulson and Pagán Cánovas, 2014) and calendars (Sinha et al., 2014), and has been experimentally connected with the direction of writing (Casasanto and Bottini, 2010; Bottini et al., 2015)…”
Section: Taxonomies Of Time: An Overview (mentioning)
confidence: 99%
“…At the same time, time is typically conceptualized using the domain of motion (e.g., Winter is coming; see Bender and Beller, 2014), even in expressions that do not include spatial language, as shown, for example, by gesticulation patterns (Cienki, 1998; Casasanto and Jasmin, 2012; Pagán-Cánovas et al., 2020). Temporal information can be conveyed by locating an event in relation to the speaker (Deictic Time), in relation to another event (Sequential Time), or by expressing the duration of the event.…”
Section: Introduction (mentioning)
confidence: 99%