This study examined the executive functioning of 55 elementary school children with and without problems in written expression. Two groups, reflecting children with and without significant writing problems, were defined by an average primary trait rating across two separate narratives. The groups did not differ in chronological age, ethnicity, gender, socioeconomic status, special education status, presence of attention problems, or receptive vocabulary; however, they did differ in reading decoding ability, and this variable was controlled for in all analyses. Dependent measures included tasks tapping an array of executive functions, grouped conceptually according to a model of executive functioning with the following domains: initiation, sustaining, set shifting, and inhibition/stopping. Analysis of covariance (ANCOVA) procedures revealed statistically significant group differences in the initiation and set-shifting domains, with the sustaining domain approaching significance. Children with writing problems performed more poorly in each of these domains, although the effect sizes were small. A multiple regression that used these four factors and the reading decoding variable to predict the primary trait score from the written narratives yielded a statistically significant regression function; however, reading decoding contributed most of the unique variance to the writing outcome. These findings underscore the importance of executive functions in the written language process for elementary school students, but also highlight the need to examine other variables when studying elementary school-age children with written expression problems.
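As a rough illustration of the analytic approach described in this abstract, the sketch below runs an ANCOVA on one executive-function domain with reading decoding as the covariate, followed by the multiple regression predicting the primary trait score from the four domain scores plus decoding. The DataFrame, its column names, and the simulated values are hypothetical placeholders, not the study's data or variable names.

```python
# Hedged sketch of the analyses summarized above (ANCOVA per executive-function
# domain with decoding as covariate; regression predicting the writing score).
# All data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 55
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),          # 0 = typical, 1 = writing problems
    "decoding": rng.normal(100, 15, n),      # reading decoding standard score
    "initiation": rng.normal(0, 1, n),
    "sustaining": rng.normal(0, 1, n),
    "set_shifting": rng.normal(0, 1, n),
    "inhibition": rng.normal(0, 1, n),
    "trait_score": rng.normal(0, 1, n),      # primary trait writing rating
})

# ANCOVA: group effect on one domain, controlling for reading decoding.
ancova = smf.ols("initiation ~ C(group) + decoding", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Multiple regression: four domain scores plus decoding predicting the trait score.
reg = smf.ols(
    "trait_score ~ initiation + sustaining + set_shifting + inhibition + decoding",
    data=df,
).fit()
print(reg.summary())
```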
This article (a) discusses the assumptions underlying the use of rating scales, (b) describes the use of information available within the context of Rasch measurement that may be useful for optimizing rating scales, and (c) demonstrates the process in two studies. Participants in the first study, 330 fourth- and fifth-grade students, provided responses to the Index of Self-Efficacy for Writing. Based on category counts, average measures, thresholds, and category fit statistics, the responses on the original 10-point scale were better represented by a 4-point scale. The modified 4-point scale was given to a replication sample of 668 fourth- and fifth-grade students. The rating scale structure was found to be congruent with the results from the first study. In addition, the item fit statistics and item hierarchy indicated that the writing self-efficacy construct was stable across the two samples. Combined, these results provide evidence for the generalizability of the findings and hence the utility of this scale for use with samples of respondents from the same population.
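The category diagnostics referred to above can be sketched roughly as follows: count how often each rating category is used, check that the average person measure rises monotonically across categories, and then collapse underused or disordered categories before re-estimating the model. The data, the person measures, and the specific recode map below are illustrative assumptions, not the scale revision actually adopted in the studies.

```python
# Illustrative category diagnostics for a Rasch rating-scale analysis.
# `responses` (persons x items, ratings 1-10) and `person_measure` (Rasch
# person estimates from a prior calibration) are placeholder data.
import numpy as np

def category_diagnostics(responses, person_measure):
    """Print the count and average person measure for each rating category;
    average measures should increase monotonically across categories."""
    measures = np.repeat(person_measure[:, None], responses.shape[1], axis=1)
    for cat in np.unique(responses):
        mask = responses == cat
        print(f"category {cat:2d}: count={mask.sum():4d}, "
              f"avg measure={measures[mask].mean():+.2f}")

rng = np.random.default_rng(0)
responses = rng.integers(1, 11, size=(330, 10))   # placeholder ratings
person_measure = rng.normal(size=330)             # placeholder person measures
category_diagnostics(responses, person_measure)

# Hypothetical collapse of the 10-point scale to 4 categories; the actual
# boundaries must be chosen from the diagnostics, not assumed in advance.
recode = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3, 9: 4, 10: 4}
collapsed = np.vectorize(recode.get)(responses)
```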
Issues surrounding the psychometric properties of writing assessments have received ongoing attention. However, the reliability estimates of scores derived from various holistic and analytical scoring strategies reported in the literature have relied on classical test theory (CTT), which accounts for only a single source of variance within a given analysis. Generalizability theory (GT) is a more powerful and flexible strategy that allows the simultaneous estimation of multiple sources of error variance when estimating the reliability of test scores. Using GT, two studies were conducted to investigate the impact of the number of raters and the type of decision (relative vs. absolute) on the reliability of writing scores. The results of both studies indicated that the reliability coefficients for writing scores decline (a) as the number of raters is reduced and (b) when absolute rather than relative decisions are made. Writing assessment is one of the most common types of performance-based testing. The importance of the decisions made using test results from performance-based assessments in applied settings makes it paramount that researchers establish the reliability of scores resulting from such assessments. Professionals charged with providing evidence for the psychometric properties of writing tests need to move beyond reliability estimates derived from classical test theory.
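A small numeric sketch makes the relative/absolute distinction concrete. In a persons-by-raters design, the relative (generalizability) coefficient charges only the person-by-rater interaction (plus residual) to error, while the absolute (dependability) coefficient also charges the rater main effect; both shrink as the number of raters decreases. The variance components below are made-up illustrative values, not estimates from the two studies.

```python
# Decision-study sketch for a persons x raters design. The variance
# components are illustrative, not estimates from the reported studies.
def g_coefficients(var_p, var_r, var_pr, n_raters):
    """Return (relative E-rho^2, absolute Phi) for a mean score over n_raters."""
    relative = var_p / (var_p + var_pr / n_raters)
    absolute = var_p / (var_p + var_r / n_raters + var_pr / n_raters)
    return relative, absolute

for n in (4, 3, 2, 1):
    rel, ab = g_coefficients(var_p=0.50, var_r=0.05, var_pr=0.20, n_raters=n)
    print(f"{n} rater(s): relative = {rel:.2f}, absolute = {ab:.2f}")
```

With these illustrative components, both coefficients fall as raters are removed, and the absolute coefficient is always the smaller of the two, mirroring the pattern the studies report.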
The purpose of this study was to provide a reliable and valid classification scheme for written expression that captured the linguistic variability present in a typical elementary school sample. This empirically derived classification model was based on the following linguistic-based writing skills: (a) understandability of discourse, (b) grammar, (c) semantics, (d) spelling, and (e) reading comprehension. The sample included 257 fourth-grade (n = 142) and fifth-grade (n = 115) students (46.3% boys, 79.4% White, age range = 8;3-11;7 years; M = 10.10). All of the students were receiving their writing instruction in the regular education setting, with approximately one third receiving some type of educational assistance. The sample fell in the middle socioeconomic stratum. Cluster analytic techniques yielded several candidate solutions. A series of internal validity studies provided strong evidence that the six-cluster solution was both stable and interpretable, with subtypes reflecting normal as well as writing disability variants. Further, the writing disability subtypes ranged from global impairment to more specific linguistic impediments. Based on their characteristics, the clusters were named (a) Average Writers (n = 102), (b) Low Semantics (n = 31), (c) Low Grammar (n = 18), (d) Expert Writers (n = 33), (e) Low Spelling-Reading (n = 13), and (f) Poor Text Quality (n = 60). Subtypes differed in the percentage of children manifesting specific writing deficits as well as on selected measures of metacognition, self-efficacy, and self-regulation of the writing process. The results provide researchers with a foundation for further investigating the underlying neurolinguistic and neurocognitive processes that may strengthen or undermine students' ability to produce a quality written product, and for designing and implementing intervention techniques to address the various subtype patterns inherent in a regular elementary school classroom.
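The cluster-analytic step summarized above can be sketched as: standardize the five writing measures, fit a six-group solution, and check its stability on random splits of the sample. The data, the choice of k-means, and the split-half check below are illustrative assumptions; the study's own derivation and internal validity procedures may have differed.

```python
# Illustrative clustering sketch on five writing measures (discourse,
# grammar, semantics, spelling, reading comprehension). Placeholder data;
# k = 6 mirrors the reported six-cluster solution.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
scores = rng.normal(size=(257, 5))                 # placeholder measures

z = StandardScaler().fit_transform(scores)         # put measures on a common scale
full = KMeans(n_clusters=6, n_init=20, random_state=0).fit_predict(z)

# Crude stability check: recluster a random half of the sample and compare
# assignments on the overlap (label permutations are handled by the
# adjusted Rand index).
half = rng.permutation(len(z))[: len(z) // 2]
half_fit = KMeans(n_clusters=6, n_init=20, random_state=1).fit_predict(z[half])
print("split-half agreement:", adjusted_rand_score(full[half], half_fit))
```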