Abstract: Two experiments investigated how goal setting and progress feedback affect self-efficacy and writing achievement. Children received writing strategy instruction and were given a process goal of learning the strategy, a product goal of writing paragraphs, or a general goal of working productively. Half of the process-goal children periodically received feedback on their progress in learning the strategy. In Experiment 2 we also explored transfer (maintenance and generalization) of achievement outcomes. The process goal with progress feedback treatment had the greatest impact on achievement outcomes, including maintenance and generalization; the process goal without feedback condition produced some benefits compared with the product and general goal conditions. Self-efficacy was highly predictive of writing skill and strategy use. Suggestions for future research and implications for classroom practice are discussed.

Article: The purpose of the present investigation was to explore the effects of process and product goals and goal progress feedback on children's achievement outcomes during writing instruction. The conceptual basis of this research was goal setting theory and research, which shows that goal setting promotes motivation and learning (Bandura, 1986; Locke & Latham, 1990). The effects of goals are not automatic, however; rather, they depend on goal properties: specificity, proximity, and difficulty. Goals that denote specific performance standards, are temporally close at hand, or are viewed as difficult but attainable enhance performance better than goals that are general (e.g., "Do your best"), temporally distant, or perceived as very easy or very difficult, respectively (Schunk, 1990).
This study examined the executive functioning of 55 elementary school children with and without problems in written expression. Two groups reflecting children with and without significant writing problems were defined by an average primary trait rating across two separate narratives. The groups did not differ in terms of chronological age, ethnicity, gender, socioeconomic status, special education status, or presence of attention problems or receptive vocabulary capabilities; however, they did differ in reading decoding ability, and this variable was controlled for in all analyses. Dependent measures included tasks tapping an array of executive functions grouped conceptually in accordance with a model of executive functioning reflecting the following domains: initiate, sustain, set shifting, and inhibition/stopping. Analysis of covariance (ANCOVA) procedures revealed statistically significant group differences on the initiation and set shift domains, with the sustaining domain approaching significance. Children with writing problems performed more poorly in each of these domains, although the effect sizes were small. A multiple regression that employed these four factors and the reading decoding variable to predict the primary trait score from the written narratives revealed a statistically significant regression function; however, reading decoding contributed most of the unique variance to the writing outcome. These findings point out the importance of executive functions in the written language process for elementary school students, but highlight the need to examine other variables when studying elementary school-age children with written expression problems.
This article (a) discusses the assumptions underlying the use of rating scales, (b) describes the use of information available within the context of Rasch measurement that may be useful for optimizing rating scales, and (c) demonstrates the process in two studies. Participants in the first study were 330 fourth- and fifth-grade students. Participants provided responses to the Index of Self-Efficacy for Writing. Based on category counts, average measures, thresholds, and category fit statistics, the responses on the original 10-point scale were better represented by a 4-point scale. The modified 4-point scale was given to a replication sample of 668 fourth- and fifth-grade students. The rating scale structure was found to be congruent with the results from the first study. In addition, the item fit statistics and item hierarchy indicated that the writing self-efficacy construct was stable across the two samples. Combined, these results provide evidence for the generalizability of the findings and hence the utility of this scale for use with samples of respondents from the same population.
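The category diagnostics named above (average measures, thresholds, fit statistics) come from fitting a Rasch rating scale model. As an illustrative sketch only, with hypothetical threshold values rather than any estimates from the study, the model's category probabilities for a person can be computed from Andrich step thresholds; disordered thresholds are one signal that adjacent categories may need collapsing:

```python
import math

def rsm_category_probs(theta, delta, taus):
    """Category probabilities under Andrich's rating scale model.

    theta: person measure (logits), delta: item difficulty,
    taus:  step thresholds tau_1..tau_m for moving into categories 1..m.
    Returns a list of probabilities for categories 0..m.
    """
    # Numerator for category x is exp of the cumulative sum of
    # (theta - delta - tau_k) for k = 1..x; category 0 contributes exp(0) = 1.
    numerators = [1.0]
    acc = 0.0
    for tau in taus:
        acc += theta - delta - tau
        numerators.append(math.exp(acc))
    total = sum(numerators)
    return [n / total for n in numerators]

# Hypothetical 4-category scale with ordered thresholds.
probs = rsm_category_probs(theta=0.0, delta=0.0, taus=[-1.0, 0.0, 1.0])
```

In practice a calibration program estimates the thresholds from response data; this sketch only shows how the category structure maps person measures to expected category use.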
Issues surrounding the psychometric properties of writing assessments have received ongoing attention. However, the reliability estimates of scores derived from the various holistic and analytic scoring strategies reported in the literature have relied on classical test theory (CTT), which accounts for only a single source of variance within a given analysis. Generalizability theory (GT) is a more powerful and flexible strategy that allows the simultaneous estimation of multiple sources of error variance when estimating the reliability of test scores. Using GT, two studies were conducted to investigate the impact of the number of raters and the type of decision (relative vs. absolute) on the reliability of writing scores. The results of both studies indicated that the reliability coefficients for writing scores decline (a) as the number of raters is reduced and (b) when absolute rather than relative decisions are made.

Writing assessment is one of the most common types of performance-based testing. The importance of the decisions made using test results from performance-based assessments in applied settings makes it paramount that researchers establish the reliability of scores resulting from such assessments. Professionals charged with providing evidence for the psychometric properties of writing tests need to move beyond reliability estimates derived
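The two findings above fall directly out of the standard GT coefficients for a persons-by-raters design. As a sketch with purely hypothetical variance components (not estimates from these studies), the relative (norm-referenced) coefficient ignores the rater main effect, while the absolute (criterion-referenced) coefficient charges it as error, so it can never exceed the relative one; both rise as raters are added:

```python
def g_coefficient(var_p, var_pr, n_raters):
    """Relative generalizability coefficient for a p x r design.

    var_p:  person (true-score) variance
    var_pr: person-by-rater interaction/residual variance
    Only relative error (interaction, averaged over raters) counts.
    """
    return var_p / (var_p + var_pr / n_raters)

def phi_coefficient(var_p, var_r, var_pr, n_raters):
    """Absolute (dependability) coefficient for a p x r design.

    The rater main effect var_r also counts as error, since absolute
    decisions depend on where each rater sets the scale.
    """
    return var_p / (var_p + (var_r + var_pr) / n_raters)

# Hypothetical components: person 0.5, rater 0.1, interaction 0.3.
g_one   = g_coefficient(0.5, 0.3, 1)          # one rater, relative
g_three = g_coefficient(0.5, 0.3, 3)          # three raters, relative
phi_one = phi_coefficient(0.5, 0.1, 0.3, 1)   # one rater, absolute
```

With these made-up components, dropping from three raters to one lowers the relative coefficient, and the absolute coefficient sits below the relative one at every rater count, mirroring both reported results.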