Most individuals who experience aphasia after a stroke recover to some extent, with the majority of gains taking place in the first year. The nature and time course of this recovery process is only partially understood, especially its dependence on lesion location and extent, which are the most important determinants of outcome. The aim of this study was to provide a comprehensive description of patterns of recovery from aphasia in the first year after stroke. We recruited 334 patients with acute left hemisphere supratentorial ischemic or hemorrhagic stroke, and evaluated their speech and language function within 5 days using the Quick Aphasia Battery. At this initial timepoint, 218 patients presented with aphasia. Individuals with aphasia were followed longitudinally, with follow-up evaluations of speech and language at 1 month, 3 months, and 1 year post stroke, wherever possible. Lesions were manually delineated based on acute clinical MRI or CT imaging. Patients with and without aphasia were divided into 13 groups of individuals with similar, commonly occurring patterns of brain damage. Trajectories of recovery were then investigated as a function of group (i.e., lesion location and extent) and speech/language domain (overall language function, word comprehension, sentence comprehension, word finding, grammatical construction, phonological encoding, speech motor programming, speech motor execution, and reading). We found that aphasia is dynamic, multidimensional, and graded, with little explanatory role for aphasia subtypes or binary concepts such as fluency. Patients with circumscribed frontal lesions recovered well, consistent with some previous observations. More surprisingly, most patients with larger frontal lesions extending into the parietal or temporal lobes also recovered well, as did patients with relatively circumscribed temporal, temporoparietal, or parietal lesions. Persistent moderate or severe deficits were common only in patients with extensive damage throughout the middle cerebral artery distribution, or extensive temporoparietal damage. There were striking differences between speech/language domains in their rates of recovery and their relationships to overall language function, suggesting that specific domains differ in the extent to which they are redundantly represented throughout the language network, as opposed to depending on specialized cortical substrates. Our findings have an immediate clinical application in that they will enable clinicians to estimate the likely course of recovery for individual patients, as well as the uncertainty of these predictions, based on acutely observable neurological factors.
Purpose: Auditory-perceptual assessment, in which trained listeners rate a large number of perceptual features of speech samples, is the gold standard for the differential diagnosis of motor speech disorders. The goal of this study was to investigate the feasibility of applying a similar, formalized auditory-perceptual approach to the assessment of language deficits in connected speech samples from individuals with aphasia. Method: Twenty-seven common features of connected speech in aphasia were defined, each of which was rated on a 5-point scale. Three experienced researchers evaluated 24 connected speech samples from the AphasiaBank database, and 12 student clinicians evaluated subsets of 8 speech samples each. We calculated interrater reliability for each group of raters and investigated the validity of the auditory-perceptual approach by comparing feature ratings to related quantitative measures derived from transcripts and clinical measures, and by examining patterns of feature co-occurrence. Results: Most features were rated with good-to-excellent interrater reliability by researchers and student clinicians. Most features demonstrated strong concurrent validity with respect to quantitative connected speech measures computed from AphasiaBank transcripts and/or clinical aphasia battery subscores. Factor analysis showed that 4 underlying factors, which we labeled Paraphasia, Logopenia, Agrammatism, and Motor Speech, accounted for 79% of the variance in connected speech profiles. Examination of individual patients' factor scores revealed striking diversity among individuals classified with a given aphasia type. Conclusion: Auditory-perceptual rating of connected speech in aphasia shows potential to be a comprehensive, efficient, reliable, and valid approach for characterizing connected speech in aphasia.
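To make the factor-analysis step concrete, here is a minimal Python sketch of reducing 27 feature ratings to four latent factors. Everything in it is assumed for illustration: the random ratings matrix stands in for real 5-point ratings of 24 speech samples, and scikit-learn's FactorAnalysis with varimax rotation stands in for whatever factoring procedure the study actually used.

```python
# Minimal sketch, NOT the study's actual pipeline: random integers stand in
# for real 5-point auditory-perceptual ratings (24 samples x 27 features).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(24, 27)).astype(float)

# Standardize features, then extract 4 varimax-rotated factors.
X = StandardScaler().fit_transform(ratings)
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(X)

loadings = fa.components_.T  # shape: (27 features, 4 factors)
for f in range(4):
    # Features loading strongly on the same factor co-occur across samples.
    top = np.argsort(-np.abs(loadings[:, f]))[:3]
    print(f"factor {f}: strongest-loading features {top.tolist()}")
```

With real data, the clusters of features loading on each factor would be what the authors interpreted as dimensions such as Paraphasia, Logopenia, Agrammatism, and Motor Speech.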
Purpose: ParAlg (Paraphasia Algorithms) is a software application that automatically categorizes the naming errors (paraphasias) that a person with aphasia produces on a picture-naming test, classifying each error in relation to its intended target. These classifications (based on lexicality as well as semantic, phonological, and morphological similarity to the target) are important for characterizing an individual's word-finding deficits, or anomia. In this study, we applied a modern language model called BERT (Bidirectional Encoder Representations from Transformers) as a semantic classifier and evaluated its performance against ParAlg's original word2vec model. Method: We used a set of 11,999 paraphasias produced during the Philadelphia Naming Test. We trained ParAlg using either word2vec or BERT and compared each model's performance to that of human annotators. Finally, we evaluated BERT's performance in terms of word-sense selection and conducted an item-level discrepancy analysis to identify which aspects of semantic similarity are most challenging to classify. Results: Compared with word2vec, BERT qualitatively reduced word-sense issues and quantitatively reduced semantic classification errors by almost half. A large percentage of the remaining errors were attributable to semantic ambiguity. Of the possible semantic similarity subtypes, responses that were associatively related to, or category coordinates of, the intended target were the most likely to be misclassified, by models and humans alike. Conclusions: BERT outperforms word2vec as a semantic classifier, partially due to its superior handling of polysemy. This work is an important step toward further establishing ParAlg as an accurate assessment tool.
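As a rough illustration of why a contextual model like BERT helps with word sense, the Python sketch below embeds a target and a response inside a disambiguating carrier phrase and compares them by cosine similarity. The bert-base-uncased checkpoint, the carrier phrase, the example word pair, and the 0.5 threshold are all assumptions for demonstration, not ParAlg's actual model, prompts, or decision criteria; a static word2vec model, by contrast, would assign each word a single vector regardless of context.

```python
# Hedged sketch of contextual semantic similarity; model, carrier phrase,
# word pair, and threshold are illustrative assumptions, not ParAlg's.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bert_embedding(word: str, context: str) -> np.ndarray:
    """Mean-pool the hidden states of `word`'s subword tokens in context."""
    inputs = tokenizer(context.format(word), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    # Locate the subword span belonging to `word` by subsequence match.
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = inputs["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0).numpy()
    raise ValueError(f"{word!r} not found in tokenized context")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The carrier phrase pushes BERT toward the intended sense of an ambiguous
# word (e.g., "glasses" as eyewear rather than drinkware).
frame = "She put on her {} to read the menu."
sim = cosine(bert_embedding("glasses", frame), bert_embedding("goggles", frame))
print("semantically related" if sim > 0.5 else "unrelated")  # 0.5 is arbitrary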
Purpose: A preliminary version of a paraphasia classification algorithm (henceforth called ParAlg) has previously been shown to be a viable method for coding picture-naming errors. The purpose of this study is to present an updated version of ParAlg, which uses multinomial classification, and to comprehensively evaluate its performance when given two different forms of transcribed input. Method: A subset of 11,999 archival responses produced on the Philadelphia Naming Test were classified into six cardinal paraphasia types using ParAlg under two transcription configurations: (a) using phonemic transcriptions for all responses (phonemic-only) and (b) using phonemic transcriptions for nonlexical responses and orthographic transcriptions for lexical responses (orthographic-lexical). Agreement was quantified by comparing ParAlg-generated paraphasia codes between configurations and relative to human-annotated codes using four metrics (positive predictive value, sensitivity, specificity, and F1 score). An item-level qualitative analysis of misclassifications under the best-performing configuration was also completed to identify the source and nature of coding discrepancies. Results: Agreement between ParAlg-generated and human-annotated codes was high, with the orthographic-lexical configuration outperforming phonemic-only (weighted-average F1 scores of .87 and .78, respectively). A qualitative analysis of the orthographic-lexical configuration revealed a mix of human- and ParAlg-related misclassifications, the former related primarily to phonological similarity judgments and the latter to semantic similarity assignment. Conclusions: ParAlg is an accurate and efficient alternative to manual scoring of paraphasias, particularly when lexical responses are orthographically transcribed. With further development, it has the potential to be a useful software application for anomia assessment. Supplemental Material: https://doi.org/10.23641/asha.22087763
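For readers unfamiliar with these agreement metrics, the short Python sketch below computes weighted-average positive predictive value (precision), sensitivity (recall), and F1, plus per-class specificity, over a toy set of paraphasia codes. The six-item code lists are invented for demonstration and are not the study's data.

```python
# Illustrative only: toy human-annotated vs. ParAlg-generated paraphasia
# codes, invented for this example; not data from the study.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

human  = ["semantic", "formal", "neologism", "semantic", "mixed", "unrelated"]
paralg = ["semantic", "formal", "neologism", "mixed",    "mixed", "semantic"]
labels = sorted(set(human))

# Positive predictive value is precision; sensitivity is recall.
ppv  = precision_score(human, paralg, labels=labels, average="weighted", zero_division=0)
sens = recall_score(human, paralg, labels=labels, average="weighted", zero_division=0)
f1   = f1_score(human, paralg, labels=labels, average="weighted", zero_division=0)

# Specificity (TN / (TN + FP)) has no multiclass helper in scikit-learn,
# so derive it per class from the confusion matrix.
cm = confusion_matrix(human, paralg, labels=labels)
fp = cm.sum(axis=0) - np.diag(cm)
tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)
specificity = tn / (tn + fp)

print(f"weighted PPV={ppv:.2f}, sensitivity={sens:.2f}, F1={f1:.2f}")
for label, spec in zip(labels, specificity):
    print(f"specificity[{label}] = {spec:.2f}")
```

The weighted average reported in the abstract weights each paraphasia type's score by its frequency, so common types dominate the summary figure.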