Two experiments examined the reliability and classification accuracy of a narration-based dynamic assessment task. Purpose: The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest change within a dynamic assessment format. The second experiment evaluated the extent to which children with language impairments performed differently than typically developing controls on dynamic assessment of narrative language. Method: In the first experiment, 58 1st- and 2nd-grade children told 2 stories about wordless picture books. Stories were rated on macrostructural and microstructural aspects of language form and content, and the ratings were subjected to reliability analyses. In the second experiment, 71 children participated in dynamic assessment. There were 3 phases: a pretest phase, in which children created a story that corresponded to 1 of the wordless picture books from Experiment 1; a teaching phase, in which children attended 2 short mediation sessions that focused on storytelling ability; and a posttest phase, in which children created a story that corresponded to a second wordless picture book from Experiment 1. Analyses compared the pretest and posttest stories told by 2 groups of children who received mediated learning (typical and language-impaired groups) and a no-treatment control group of typically developing children from Experiment 1. Results: The results of the first experiment indicated that the narrative measures applied to stories about 2 different wordless picture books had good internal consistency. In Experiment 2, typically developing children who received mediated learning demonstrated greater pretest-to-posttest change than children in the language-impaired and control groups. Classification analysis indicated better specificity and sensitivity values for measures of response to intervention (modifiability) and posttest storytelling than for measures of pretest storytelling. Observation of modifiability was the single best indicator of language impairment. Posttest measures and modifiability together yielded no misclassifications. Conclusion: The first experiment supported the use of 2 wordless picture books as stimulus materials for collecting narratives before and after mediation within a dynamic assessment paradigm. The second experiment supported the use of dynamic assessment for accurately identifying language impairments in school-age children.

Traditionally, speech-language pathologists diagnose children with language impairments (LI) by comparing their performance on standardized tests with the performance of their same-age peers (Tomblin, Records, & Zhang, 1996). There is growing dissatisfaction with single-time assessment because of the potential for measurement error (Bracken, 1988; McCauley & Swisher, 1984a, 1984b; Plante & Vance, 1994) as well as the interfering effects of potential cultural bias (De...
Current input-output appears to be a better predictor of phonological accuracy than age of first English exposure for early bilinguals, consistent with findings on the effect of language experience on performance in other language domains in bilingual children. Although greater current input-output in a language predicts higher accuracy in that language, this effect interacts with sound complexity. The results highlight the utility of the EML classification in assessing bilingual children's phonology. The relationships of intelligibility ratings with current input-output and sound accuracy can inform the process of referring bilingual children for speech and language services.
The clinical fellowship (CF) is completed by speech-language pathologists (SLPs) after graduating with a master's degree. The clinical fellow is supervised by a mentoring SLP who meets the qualifications set forth by the American Speech-Language-Hearing Association (ASHA). The current study focused on differences and similarities in the expectations held by the mentoring SLP and the clinical fellow during the CF. Participants were asked about the knowledge and skills on which clinical fellows are evaluated at the end of the CF in the areas of assessment and treatment. Clinical fellows wanted more help with assessment and less assistance with treatment. Mentoring SLPs felt assistance was needed for both assessment and treatment. A model is proposed to aid in implementing recommendations based on these results.
Purpose: The majority of supervision literature has focused on the supervision of graduate students. The purpose of this study was to examine the preparation and explore the thinking styles of mentoring speech-language pathologists (SLPs) who supervise speech-language pathology graduates completing their clinical fellowship. Method: A questionnaire was emailed to 1,626 SLPs who reported working in school settings; 102 participants responded. They were asked questions about demographics, caseloads, supervisory experiences, knowledge of the American Speech-Language-Hearing Association (ASHA, 2008) standards for supervision, and thinking styles. Forty participants reported their years of supervisory experience and reported mentoring a clinical fellow in the past 3 years. These 40 participants were divided into two groups: mentoring SLPs with less experience (LEM; 1–5 years) and mentoring SLPs with more experience (MEM; 6 or more years). Results: The LEM and MEM groups demonstrated different patterns in meeting and contacting their clinical fellows (CFs) and in participating in training activities for mentoring SLPs. There were also differences in the thinking styles of the groups. The MEM group, more than the LEM group, rated themselves as preferring tasks that allowed them to work on one thing at a time as well as tasks with a more global focus. Conclusions: Differences in thinking styles may affect the style of supervision and mentoring. These relationships warrant further examination.