Although word frequency is often associated with the cognitive load on the reader and is widely used in automated text complexity assessment, to date no eye-tracking data have been obtained on the effectiveness of this parameter for predicting text complexity for Russian primary school readers. Moreover, the optimal ways of aggregating the frequencies of individual words into a complexity estimate for an entire text have not yet been determined. This article aims to fill these gaps. The study was conducted on a sample of 53 children of primary school age. As stimulus material, we used six texts that differ in their scores on the classical Flesch readability formula and in word frequency data. As sources of frequency data, we used the general frequency dictionary based on the Russian National Corpus and DetCorpus, a corpus of literature addressed to children. The speed of reading the text aloud in words per minute, averaged over the grades, was employed as a measure of text complexity. The best prediction of relative reading time was obtained using lemma frequency data from DetCorpus. At the text level, the highest correlation with reading speed was shown by text coverage with a list of the 5,000 most frequent words, while both sources of the lists, the Russian National Corpus and DetCorpus, showed almost identical correlation values. For a more detailed analysis, we also calculated the correlation of the frequency parameters of specific word forms and lemmas with three parameters of oculomotor activity: dwell time, fixation count, and average fixation duration. At the word-by-word level, lemma frequency from DetCorpus demonstrated the highest correlation with relative reading time. Our results confirm the feasibility of using frequency data in text complexity assessment for primary school children and indicate optimal ways to compute frequency measures.
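The text-coverage measure used above can be illustrated with a minimal sketch: coverage is the share of a text's tokens whose lemma appears among the top-N entries of a rank-ordered frequency list from a reference corpus (e.g., the Russian National Corpus or DetCorpus). All lemma data below are invented for illustration.

```python
def coverage(text_lemmas, freq_ranked_lemmas, top_n=5000):
    """Fraction of tokens whose lemma is among the top_n most frequent lemmas."""
    top = set(freq_ranked_lemmas[:top_n])
    covered = sum(1 for lemma in text_lemmas if lemma in top)
    return covered / len(text_lemmas)

# Toy example with an invented three-lemma "frequency list":
ranked = ["и", "в", "кот"]          # hypothetical rank-ordered lemma list
text = ["кот", "и", "собака", "в"]  # hypothetical lemmatized text
print(coverage(text, ranked, top_n=3))  # 0.75
```

In a real setting the text would first be lemmatized and the frequency list drawn from the chosen corpus; the resulting coverage value can then be correlated with reading speed across texts.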
This article describes the results of a pilot experimental study of the perception and understanding of digital multimodal texts using eye-tracking and linguistic analysis. The aim of the study was to test the hypothesis that the distribution of visual attention during image perception in multimodal texts corresponds to the degree of semantic relationship between the image and other parts of the stimulus. Multimodal texts in the format "static image + written utterance", selected from real expert practice, were used as stimulus material. The study involved 84 participants with varying degrees of awareness of extremist discourse. In the first part of the experiment, respondents viewed and interpreted the content of the multimodal texts while their eye movements were recorded; in the second part, respondents viewed, marked up, and commented on the same stimuli, with the task of finding and explaining the semantic correlations between the image and the verbal parts. The subjects of analysis were (a) the interest area (IA) dwell time normalized by area size, (b) the number of views of the IA, (c) the average number of connections of the IA, reduced to (d) the average rank of connectivity of the area with other components. No significant differences were found depending on the respondents' degree of awareness of extremist discourse, either in the average number of IA connections or in the distribution of ranks. IAs containing images of faces were found to have the highest number of views. In a one-way analysis of variance, connectivity rank had a significant effect on the number of IA views in all texts (p < 0.05), with both direct and inverse correlations between these factors. These results suggest that there is no universal connection between viewing parameters of multimodal texts, such as area-normalized dwell time or the number of views, and the number of connections of an image component with the others.
We suggest that the effectiveness of the search for meanings that make the components of a multimodal text coherent does not depend on the amount of time and eye movement required for the visual perception sufficient to understand the text.
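The one-way analysis of variance applied above, testing whether connectivity rank affects the number of IA views, can be sketched as a plain F-statistic computation. All grouped view counts below are invented for illustration; in practice the groups would be view counts binned by connectivity rank.

```python
from itertools import chain

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over lists of observations."""
    all_obs = list(chain.from_iterable(groups))
    n, k = len(all_obs), len(groups)
    grand_mean = sum(all_obs) / n
    # Between-group sum of squares (group size times squared mean deviation)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (deviations from each group's own mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented IA view counts, grouped by connectivity rank:
views_by_rank = [[12, 14, 11], [8, 9, 10], [5, 6, 4]]
F = one_way_anova_F(views_by_rank)
```

The resulting F statistic is compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value reported in the study; a library routine such as `scipy.stats.f_oneway` performs both steps at once.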
The article presents the results of a pilot study of the perception of the demotivator and meme genres. It was part of an experimental study of the psychophysiological and psycholinguistic features of the perception and understanding of multimodal extremist texts. The aim of the study was to develop and test the hypothesis that genre influences the perception of multimodal texts. To test the hypothesis, we analysed the respondents' eye-movement data from the main experimental study (n = 60; 31 forensic linguists with anti-extremism practice, 29 non-experts). The research methods were eye-tracking and quantitative data processing. The following statistically reliable results were obtained: compared to memes, respondents looked at demotivators (1) for a longer time, made (2) shorter fixations, (3) in greater numbers, and also made (4) faster and (5) shorter saccades. These parameters may indicate a denser scanning pattern when viewing demotivators compared to memes and greater cognitive expenditure in assessing the semantic content of demotivator texts. The results suggest a connection between genre and the perceptual complexity of multimodal texts. This opens an avenue for further research and, in the future, may enable the development of cognitive-load norms for forensic linguists who analyse multimodal extremist texts.