Speech impairment analysis and processing technologies have evolved substantially in recent years, and the use of voice as a biomarker has gained popularity. We developed a clinical speech-signal-processing approach to demonstrate the promise of deep-learning-driven voice analysis as a screening tool for Parkinson's disease (PD), the world's second most prevalent neurodegenerative disease. Detecting PD symptoms typically requires an evaluation by a movement-disorder specialist, which can be difficult to obtain and may yield variable findings. A vocal digital biomarker could supplement the time-consuming manual examination by recognizing and evaluating symptoms that characterize voice quality and the degree of deterioration. We present a deep-learning-based, custom U-lossian model for PD assessment and recognition. The study's goal was to discover anomalies in the PD-affected voice and to develop an automated screening method that can discriminate between the voices of PD patients and healthy volunteers while also providing a voice-quality score. Classification accuracy was evaluated on two speech corpora (the Italian PVS corpus and our own Lithuanian PD voice dataset), reaching 0.8964 and 0.7949, respectively, which we consider medically appropriate and which confirms the proposed model's high generalizability.
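As a minimal illustration of the screening idea described above (not the U-lossian model itself), a binary classifier's sigmoid output can serve double duty: thresholded, it yields the PD-vs-healthy decision, and left continuous, it can act as a voice-quality score. The feature names, weights, and threshold below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a real score into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def screen(features, weights, bias, threshold=0.5):
    """Return (label, score): label 1 = suspected PD voice, score in (0, 1)."""
    score = sigmoid(features @ weights + bias)
    return int(score >= threshold), float(score)

# Toy example: acoustic feature vectors (e.g. jitter, shimmer, harmonicity)
# with made-up weights, purely to show the decision + score mechanism.
w = np.array([4.0, 3.0, -2.0])
b = -1.0
healthy = np.array([0.1, 0.1, 0.9])   # low perturbation, high harmonicity
impaired = np.array([0.8, 0.7, 0.2])  # high jitter/shimmer, low harmonicity

print(screen(healthy, w, b))   # low score -> label 0
print(screen(impaired, w, b))  # high score -> label 1
```

In the paper's setting a deep network would replace the linear score, but the output interpretation (threshold for screening, raw probability as a graded quality estimate) is the same.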
Laryngeal carcinoma is the most common malignant tumor of the upper respiratory tract. Total laryngectomy completely and permanently separates the upper and lower airways, causing loss of voice and leaving the patient unable to communicate verbally in the postoperative period. This paper applies modern deep learning methods to objectively classify, extract, and measure substitution voicing after laryngeal oncosurgery from the audio signal. We propose using convolutional neural networks (CNNs), well established in image classification, to analyze the voice audio signal. Our approach takes a Mel-frequency spectrogram as the input to a deep neural network architecture. A database of digital speech recordings from 367 male subjects (279 normal and 88 pathological speech samples) was used. Our approach achieved the best true-positive rate among the compared state-of-the-art approaches, with an overall accuracy of 89.47%.
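The key preprocessing step named above, turning a voice recording into a Mel-frequency spectrogram that a CNN can consume as an image, can be sketched with plain NumPy. This is an illustrative implementation, not the paper's pipeline; the sample rate, FFT size, hop length, and mel-band count are assumed values.

```python
import numpy as np

def hz_to_mel(f):
    """Convert frequency in Hz to the mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Convert mel-scale values back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filterbank of shape (n_mels, n_fft // 2 + 1)."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Frames -> windowed |STFT|^2 -> mel filterbank -> log power."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-10)

# Example: one second of a synthetic 220 Hz tone standing in for a voice sample
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
S = mel_spectrogram(tone, sr=sr)
print(S.shape)  # (time frames, mel bands) -> a 2-D "image" for the CNN
```

The resulting 2-D array of time frames by mel bands is what allows image-classification CNN architectures to be reused for voice pathology analysis.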