Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2021)
DOI: 10.18653/v1/2021.naacl-main.10
Multilingual Language Models Predict Human Reading Behavior

Abstract: We analyze if large language models are able to predict patterns of human reading behavior. We compare the performance of language-specific and multilingual pretrained transformer models to predict reading time measures reflecting natural human sentence processing on Dutch, English, German, and Russian texts. This results in accurate models of human reading behavior, which indicates that transformer models implicitly encode relative importance in language in a way that is comparable to human processing mechanisms…
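As a rough illustration of the setup the abstract describes, the sketch below attaches a token-level regression head to a pretrained multilingual transformer so it can predict scalar reading-time measures. The model name, number of eye-tracking features, and example sentence are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: predict token-level eye-tracking measures
# (e.g., first fixation duration, total reading time) from a
# multilingual transformer. Illustrative only, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-multilingual-cased"  # an XLM model could be swapped in

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

class EyeTrackingRegressor(torch.nn.Module):
    """Maps contextual embeddings to n_features reading-time measures."""
    def __init__(self, encoder, n_features: int = 5):
        super().__init__()
        self.encoder = encoder
        self.head = torch.nn.Linear(encoder.config.hidden_size, n_features)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(hidden)  # shape: (batch, seq_len, n_features)

model = EyeTrackingRegressor(encoder)
batch = tokenizer(["De kat zit op de mat."], return_tensors="pt")  # Dutch example
preds = model(batch["input_ids"], batch["attention_mask"])
print(preds.shape)  # torch.Size([1, seq_len, 5])
```

In practice the subword-level predictions would be pooled back to word level before comparison with word-based eye-tracking corpora, and the head trained with a regression loss (e.g., MSE) against the recorded measures.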

Cited by 23 publications (23 citation statements) · References 37 publications
“…The eye-tracking signal represents human reading processes aimed at language understanding. In previous work, we have shown that contextualized language models can predict eye patterns associated with human reading (Hollenstein et al., 2021), which indicates that computational models and humans encode similar linguistic patterns. It remains an open debate to what extent language models are able to approximate language understanding (Bender and Koller, 2020).…”
Section: Patterns of Relative Importance
confidence: 81%
“…It is of growing interest to researchers to be able to characterize how any text compares to that from another source, be it generated by humans or artificial language models. For example, NLP practitioners are interested in understanding and improving language model output, bringing it closer to human-generated text, e.g., Ettinger (2020); Hollenstein et al. (2021); Meister et al. (2022); similarly, language scientists are highly interested in using large-scale language models to develop and test hypotheses about language processing in the mind and brain, e.g., Schrimpf et al. (2021); Caucheteux and King (2022); Goldstein et al. (2022).…”
Section: Introduction
confidence: 99%
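The comparison this excerpt describes is often operationalized via language-model surprisal: texts whose per-token surprisal under a model diverges strongly are "far" from that model's distribution. Below is a minimal sketch under that assumption, using an off-the-shelf causal LM; it is not the specific method of any cited work, and the example sentences are arbitrary.

```python
# Hypothetical sketch: compare texts by their average surprisal under a
# causal language model. Model choice and inputs are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_surprisal(text: str) -> float:
    """Average negative log-probability (nats per token) under the LM."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean
        # next-token cross-entropy, i.e., average surprisal.
        loss = model(ids, labels=ids).loss
    return loss.item()

print(mean_surprisal("The cat sat on the mat."))
print(mean_surprisal("Colorless green ideas sleep furiously."))
```

A lower average surprisal indicates text the model finds more predictable; comparing these scores across human-written and model-generated samples is one simple way to quantify how closely generated text matches human text.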