Previous work has demonstrated that, when given feedback, younger adults are more likely to correct high-confidence errors than low-confidence errors, a finding termed the hypercorrection effect. Research examining the hypercorrection effect in both older and younger adults has shown that the relationship between confidence and error correction is stronger for younger adults than for older adults. However, recent work suggests that error correction is largely related to prior knowledge, and that confidence may primarily serve as a proxy for prior knowledge. Because prior knowledge generally remains stable or increases with age, the current experiment explored how both confidence and prior knowledge contributed to error correction in younger and older adults. Participants answered general knowledge questions, rated how confident they were that each response was correct, received correct-answer feedback, and rated their prior knowledge of the correct response. Overall, confidence was related to error correction for younger adults, but this relationship was much smaller for older adults. Prior knowledge, however, was strongly related to error correction for both age groups, and confidence played little unique role in error correction after controlling for prior knowledge. These data demonstrate that prior knowledge largely predicts error correction and suggest that both older and younger adults can use their prior knowledge to effectively correct errors in memory.
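The "little unique role after controlling" claim reflects the standard logic of entering both predictors into one model. Below is a minimal sketch of that logic, not the authors' actual analysis: simulated item-level data in which prior knowledge drives correction and confidence merely tracks knowledge. The column names (confidence, prior_knowledge, corrected) are hypothetical.

```python
# Sketch: confidence predicts correction alone, but its coefficient shrinks
# once prior knowledge (the variable actually generating the data) is added.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
prior_knowledge = rng.integers(1, 8, n)                            # 1-7 rating
confidence = np.clip(prior_knowledge + rng.normal(0, 2, n), 1, 7)  # noisy proxy
p_correct = 1 / (1 + np.exp(-(0.8 * prior_knowledge - 3)))         # knowledge-driven
corrected = (rng.random(n) < p_correct).astype(int)

df = pd.DataFrame({"prior_knowledge": prior_knowledge,
                   "confidence": confidence,
                   "corrected": corrected})

# Confidence alone vs. confidence controlling for prior knowledge
print(smf.logit("corrected ~ confidence", df).fit(disp=0).params)
print(smf.logit("corrected ~ confidence + prior_knowledge", df).fit(disp=0).params)
```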
Using a case study, we show that variation in oral reading rate across passages for professional narrators is consistent across readers and that much of it can be explained by features of the texts being read. While text complexity is a poor predictor of reading rate, a substantial share of the variability can be explained by timing and story-based features, with performance reaching r = .75 for unseen passages and an unseen narrator.
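A minimal sketch of this style of evaluation, under assumed inputs rather than the paper's actual features or model: fit a regression on per-passage features, then score with Pearson's r on held-out passages. X and y here are synthetic placeholders.

```python
# Sketch: held-out Pearson's r for a text-feature model of reading rate.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))             # e.g., timing and story-based features
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=200)  # synthetic rates

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
r, _ = pearsonr(y_te, model.predict(X_te))
print(f"held-out r = {r:.2f}")             # analogous to the reported r = .75
```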
Power transforms have been increasingly used in linear mixed-effects models (LMMs) of chronometric data (e.g., response times [RTs]) as a statistical solution to preempt violations of the residual-normality assumption. However, differences in results between LMMs fit to raw RTs and to transformed RTs have reignited discussion of issues surrounding RT transformation. Here, we analyzed three word-recognition megastudies and performed Monte Carlo simulations to better understand the consequences of transforming RTs in LMMs. Within each megastudy, transforming RTs produced different fixed- and random-effect patterns; across the megastudies, RTs were optimally normalized by different power transforms, and results were more consistent among LMMs fit to raw RTs. Moreover, the simulations showed that LMMs fit to optimally normalized RTs had greater power for main effects in smaller samples, whereas LMMs fit to raw RTs had greater power for interaction effects as sample sizes increased, with negligible differences in Type I error rates between the two models. Based on these results, LMMs should be fit to raw RTs when there is no compelling reason beyond nonnormality to transform RTs and when the interpretive framework mapping the predictors onto RTs treats RT as an interval scale.
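A minimal sketch of the raw-versus-transformed comparison, on simulated data rather than the megastudies: estimate the Box-Cox lambda that best normalizes the RTs (which is why the "optimal" transform can differ across datasets), then fit an LMM on each scale.

```python
# Sketch: LMM fit to raw RTs vs. Box-Cox-transformed RTs (simulated data).
import numpy as np
import pandas as pd
from scipy.stats import boxcox
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_items = 30, 40
subj = np.repeat(np.arange(n_subj), n_items)
freq = np.tile(rng.normal(size=n_items), n_subj)       # e.g., word frequency
rt = np.exp(6.5 - 0.1 * freq + rng.normal(0, 0.1, n_subj)[subj]
            + rng.normal(0, 0.3, n_subj * n_items))    # right-skewed raw RTs (ms)

df = pd.DataFrame({"subj": subj, "freq": freq, "rt": rt})
df["rt_bc"], lam = boxcox(df["rt"])                    # dataset-specific lambda
print(f"Box-Cox lambda = {lam:.2f}")

raw = smf.mixedlm("rt ~ freq", df, groups=df["subj"]).fit()
bc = smf.mixedlm("rt_bc ~ freq", df, groups=df["subj"]).fit()
print(raw.params["freq"], bc.params["freq"])           # fixed effect on each scale
```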
Variability in oral reading fluency (ORF), an indicator of foundational reading skills, has been linked to characteristics of texts. Such text-based variability in ORF has traditionally been attributed to text complexity, but substantial text-based variability remains after accounting for text complexity. We consider that oral reading requires pronouncing the text aloud, which makes it subject to the same articulatory and prosodic constraints as other types of speech production. Thus, texts with similar levels of complexity may still differ in expected duration when read aloud because of their segmental and prosodic structure, leading to differences in reading rate. We propose that these production-related effects are also important sources of text-based ORF variability. Data from upper elementary school students in the United States reading a large variety of passages from a popular fiction book showed that a composite measure of production-related effects (i.e., reading rate estimates derived from a text-to-speech synthesis system) explained a substantial amount of text-based ORF variability. Follow-up exploratory analyses indicated that these production-related effects are robust. Because text complexity metrics consist of features that also tap into production constraints, our results motivate an updated interpretation of text complexity effects on ORF and highlight the importance of accounting for production-related effects on ORF, which have yet to be acknowledged in the ORF literature as potential sources of text-based variability.
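A minimal sketch of a TTS-derived rate estimate, assuming pyttsx3 as a stand-in for the paper's unspecified synthesis system: synthesize the passage, measure the audio duration, and convert to words per minute.

```python
# Sketch: production-based reading rate estimate from TTS output.
# pyttsx3 is an assumed stand-in; output file format can vary by platform.
import wave
import pyttsx3

def tts_rate_wpm(text: str, path: str = "passage.wav") -> float:
    engine = pyttsx3.init()
    engine.save_to_file(text, path)        # synthesize passage to an audio file
    engine.runAndWait()
    with wave.open(path, "rb") as wav:
        seconds = wav.getnframes() / wav.getframerate()
    return len(text.split()) / (seconds / 60.0)

print(f"{tts_rate_wpm('The quick brown fox jumps over the lazy dog.'):.0f} wpm")
```

Two passages with identical complexity scores can yield different estimates here purely because of their segmental and prosodic structure, which is the production-related variability the abstract describes.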