The purpose of this study was to examine the specific word- and sentence-level features most frequently used in the expository writing of four groups of college writers. Three of the groups comprised writers with documented disabilities: Group 1 (n = 87) had learning disabilities (LD); Group 2 (n = 50), attention-deficit/hyperactivity disorder (ADHD); and Group 3 (n = 58), combined LD and ADHD. Group 4 consisted of writers with no history of a documented disability (n = 92). Computer-based analysis and structural equation modeling were used to group the specific linguistic features identified in the expository essays across all four groups. The frequency of linguistic features, not errors, was analyzed. Four communication dimensions (factors) were identified for the four groups of writers, but the factor loadings and correlations differed significantly across groups. Furthermore, the relationships of specific linguistic features to the verbosity, quality, and lexical complexity of students' expository essays were examined. Notably, very high correlations were found among verbosity, quality, and lexical complexity, suggesting that these constructs are not as separate in their functioning as might be supposed. Implications for assessment and instruction are provided.
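As a minimal sketch of how such group differences can be represented (the abstract does not specify the authors' exact model), a multigroup confirmatory factor model estimates, for each writer group g,

y^{(g)} = \Lambda^{(g)} \eta^{(g)} + \varepsilon^{(g)}, \qquad \operatorname{Cov}\bigl(\eta^{(g)}\bigr) = \Phi^{(g)},

where y collects the frequencies of the measured linguistic features, \eta contains the four communication dimensions (factors), \Lambda^{(g)} holds the group-specific factor loadings, and \Phi^{(g)} the factor correlations; the reported finding is that \Lambda^{(g)} and \Phi^{(g)} differ significantly across the four groups.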
In this study, the authors analyzed 2,056 spelling errors produced by 130 young adults (65 with dyslexia, 65 typically achieving), drawn from two sources: a standardized spelling test and an impromptu essay-writing task. Students with dyslexia exhibited higher spelling error rates on both tasks. To characterize the inaccurate spelling attempts of both groups, the authors conducted linguistic and item-level analyses. Among unconstrained errors (essay), students with dyslexia had more difficulty than their typically achieving peers with familiar, low-level items (indexed by word frequency and number of syllables). Among constrained errors (spelling dictation), group differences in phonetic plausibility, morphological awareness, and visual accuracy varied by item. These analyses were especially informative for low-frequency items on which the two groups obtained similar (dichotomous) accuracy rates. The authors suggest that diagnosticians and educators employ error analysis to obtain critical information not typically reflected in the standard scores used to make learning disability identification decisions.
The comprehension section of the Nelson-Denny Reading Test (NDRT) is widely used to assess the reading comprehension skills of adolescents and adults in the United States. In this study, the authors explored the content validity of the NDRT Comprehension Test (Forms G and H) by asking university students (with and without at-risk status for learning disorders) to answer the multiple-choice comprehension questions without reading the passages. Overall accuracy rates were well above chance for both NDRT forms and both groups of students. These results raise serious questions about the validity of the NDRT and its use in the identification of reading disabilities.
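For context, an illustrative calculation (the abstract does not report the number of answer options per item): if each comprehension item offers k answer choices, blind guessing yields an expected accuracy of

1/k, \quad \text{e.g., } 1/5 = 0.20 \text{ for five-option items},

so accuracy well above this level without access to the passages suggests that many items can be answered from background knowledge or test-taking strategies alone, which is the basis of the content-validity concern.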
Intelligence tests are usually part of the assessment battery for the diagnosis of adults with learning disabilities (LD) and attention-deficit/hyperactivity disorder (ADHD). Professionals must ensure that inferences drawn from such test scores are equivalent across populations with and without disabilities. Examination of measurement equivalence provides a direct test of the hypothesis that the same set of latent variables underlies a set of test scores in different groups and that the metric relationships between observed scores and the corresponding latent variables are the same across groups. The hypothesis of measurement equivalence was examined in two samples of college students: one with LD and one with ADHD. Scores on the third editions of the Wechsler Adult Intelligence Scale and the Wechsler Memory Scale were compared with those of an age-matched subset of the conorming sample. Results supported the assumption of measurement equivalence but revealed marked differences across samples in latent variable variances and covariances and in latent variable means.
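Stated formally (a standard formulation of measurement equivalence, not quoted from the article), the hypothesis constrains the measurement part of the model to be equal across groups: for subtest score x_i in group g,

x_i^{(g)} = \tau_i + \lambda_i \xi^{(g)} + \delta_i^{(g)}, \quad \text{with the intercepts } \tau_i \text{ and loadings } \lambda_i \text{ identical in every group},

while the latent means \kappa^{(g)} and latent covariance matrices \Phi^{(g)} remain free to differ. The results summarized above fit this pattern: the measurement parameters were invariant, but the latent variances, covariances, and means differed between the clinical samples and the conorming subset.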