2022
DOI: 10.1016/j.cognition.2022.105127

Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements

Cited by 6 publications (4 citation statements)
References 55 publications
“…On the other hand, speakers of Turkish and Japanese typically distributed path and manner information across different clauses in speech and also tended to produce separate gestures for path and manner. Similar findings have been replicated across different languages and language pairs, such as English (Kita et al., 2007), Farsi (Akhavan et al., 2017), French (Gullberg et al., 2008), Turkish (Mamus et al., 2022, 2023; Ünal et al., 2022), Dutch-Turkish (ter Bekke et al., 2022), Turkish-English (Özçalışkan et al., 2016a, 2016b; Özyürek et al., 2005), Korean-English (Choi & Lantolf, 2008) and Japanese-English-Turkish (Kita & Özyürek, 2003). Thus, co-speech gesture often follows the typological patterns in motion event encoding defined by Talmy (2000).…”
Section: Introduction (mentioning)
confidence: 69%
“…Most studies reviewed so far have used visual stimuli to examine motion event expressions and their patterns by using video clips, cartoons, line drawings and so on (Akhavan et al., 2017; Gennari et al., 2002; Gullberg et al., 2008; Kita & Özyürek, 2003; Papafragou et al., 2002; Slobin et al., 2014; ter Bekke et al., 2022; Ünal et al., 2022). These studies have not taken into account whether these patterns might change depending on the modality of the input.…”
Section: Role of Sensory Modality and Visual Experience in Multimodal... (mentioning)
confidence: 99%
“…Nowadays, the rapid development of artificial intelligence and the Internet of Things (IoT) has brought great changes to people’s lives, and human–machine interaction sets the application requirements and research directions for both fields. Improving the efficiency of communication between human and machine remains the hardest problem for researchers to overcome, and the communication system connecting human and machine is particularly important. There are currently two main methods of human–machine interaction: voice–machine interaction and gesture–machine interaction. Voice signals carry rich information about the human speaker and are easy to use, so people can issue and execute accurate commands by voice. At present, the main sensors used for speech recognition are capacitive sensors, piezoresistive sensors, piezoelectric sensors, and triboelectric sensors. However, some components in capacitive and piezoresistive sensors, such as conventional commercial microphones, rely on battery charging and are vulnerable to environmental noise.…”
Section: Introduction (mentioning)
confidence: 99%