Background: AphasiaBank is a computerized database of interviews between persons with aphasia (PWAs) and clinicians. By February 2011, the database had grown to include 145 PWAs and 126 controls from 12 sites across the United States. The data and related analysis programs are freely available on the web.
Aims: The overall goal of AphasiaBank is the construction of a system for accumulating and sharing data on language usage by PWAs. To achieve this goal, we have developed a standard elicitation protocol and systematic automatic and manual methods for transcription, coding, and analysis.
Methods & Procedures: We present sample analyses of transcripts from the retelling of the Cinderella story. These analyses illustrate the application of our methods to the study of phonological, lexical, semantic, morphological, syntactic, temporal, prosodic, gestural, and discourse features.
Main Contribution: AphasiaBank will give researchers access to a large, shared database that can facilitate hypothesis testing and increase methodological replicability, precision, and transparency.
Conclusions: AphasiaBank will provide researchers with an important new tool in the study of aphasia.
A distinguishing feature of Broca's aphasia is non-fluent, halting speech, typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli, enabling them to produce fluent speech in real time. We call this effect 'speech entrainment'; here we reveal its neural mechanism and explore its usefulness as a treatment for speech production in Broca's aphasia. In Experiment 1, 13 patients with Broca's aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback, where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback, where patients mimicked heard speech; and (iii) spontaneous speech, where patients spoke freely about assigned topics. The patients produced a greater variety of words with audio-visual feedback than with audio-only feedback or spontaneous speech; no difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients from Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results for patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment than for spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca's area. Probabilistic white matter tracts constructed for these regions in the control subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule; unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients from Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production.
Behavioural and functional magnetic resonance imaging data were collected before and after the treatment phase. Patients were able to produce a greater variety of words, with and without speech entrainment, at 1 and 6 weeks after training. A treatment-related decrease in cortical activation associated with speech entrainment was found in areas of the left posterior-inferior parietal lobe. We conclude that speech entrainment allows patients with Broca's aphasia to double their speech output compared with spontaneous speech. Neuroimaging results suggest that speech entrainment enables fluent speech by providing an external gating mechanism that yokes a ventral language network encoding conceptual aspects of speech. Preliminary results suggest that training with speech entrainment improves speech production in Broca's aphasia, providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment.
Background: AphasiaBank is a collaborative project whose goal is to develop an archival database of the discourse of individuals with aphasia. Along with databases on first language acquisition, classroom discourse, second language acquisition, and other topics, it forms a component of the general TalkBank database. It uses tools from the wider system that are further adapted to the particular goal of studying language use in aphasia.
Aims: The goal of this paper is to illustrate how TalkBank analytic tools can be applied to AphasiaBank data.
Methods & Procedures: Both aphasic (n = 24) and non-aphasic (n = 25) participants completed a 1-hour standardised videotaped data elicitation protocol. These sessions were transcribed and tagged automatically for part of speech. One component of the larger protocol was the telling of the Cinderella story. For these narratives we compared lexical diversity across the groups and computed the top 10 nouns and verbs across both groups.
Conclusions: Using these tools we showed that, in a story-retelling task, aphasic speakers had a marked reduction in lexical diversity and a greater use of light verbs. For example, aphasic speakers often substituted “girl” for “stepsister” and “go” for “disappear”. These findings illustrate how it is possible to use TalkBank tools to analyse AphasiaBank data.
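The lexical diversity and top-10 word analyses described above can be sketched with a simple type-token computation over part-of-speech-tagged transcripts. This is a minimal illustration on invented toy data, not the TalkBank method itself (AphasiaBank analyses use the CHAT transcript format and CLAN tools, and lexical diversity there is typically measured with more robust statistics than a raw type-token ratio):

```python
from collections import Counter

def type_token_ratio(tokens):
    """Lexical diversity: number of distinct word types over total tokens."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def top_words(tagged, pos, n=10):
    """Most frequent words carrying a given part-of-speech tag."""
    return Counter(w for w, t in tagged if t == pos).most_common(n)

# Invented toy transcript: (word, POS) pairs from a Cinderella retelling.
tagged = [("girl", "NOUN"), ("go", "VERB"), ("girl", "NOUN"),
          ("ball", "NOUN"), ("go", "VERB"), ("dance", "VERB")]
tokens = [w for w, _ in tagged]

print(round(type_token_ratio(tokens), 2))  # 0.67: 4 word types over 6 tokens
print(top_words(tagged, "NOUN", 2))        # [('girl', 2), ('ball', 1)]
```

A lower type-token ratio, together with frequency lists dominated by light, general-purpose words such as "girl" and "go", is the pattern the study reports for the aphasic group.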
Building on the success of the ADReSS Challenge at Interspeech 2020, which attracted 34 participating teams from across the world, the ADReSSo Challenge targets three difficult automatic prediction problems of societal and medical relevance: detection of Alzheimer's dementia, inference of cognitive testing scores, and prediction of cognitive decline. This paper presents these prediction tasks in detail, describes the datasets used, and reports the results of the baseline classification and regression models we developed for each task. A combination of acoustic and linguistic features extracted directly from audio recordings, without human intervention, yielded a baseline accuracy of 78.87% for the AD classification task, a root mean squared error (RMSE) of 5.28 for MMSE score prediction, and 68.75% accuracy for the cognitive decline prediction task.
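The baseline figures quoted above are standard classification accuracy and root mean squared error. As a reminder of how these two metrics are computed, here is a minimal sketch on invented toy predictions; it does not reproduce the challenge's actual feature extraction or models:

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predicted labels that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted scores."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Invented toy data: labels 1 = AD, 0 = control; MMSE scores range from 0 to 30.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))        # 0.75
print(round(rmse([28, 22, 15], [26, 25, 15]), 2))  # 2.08
```

Accuracy is used for the binary AD and cognitive-decline tasks, while RMSE (in MMSE points) penalises large errors on the score-regression task more heavily than small ones.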