Background: There is a need for fast, accessible, low-cost, and accurate diagnostic methods for early detection of cognitive decline. Dementia diagnoses are usually made years after symptom onset, missing a window of opportunity for early intervention. Objective: To evaluate the use of recorded voice features as proxies for cognitive function by using neuropsychological test measures and existing dementia diagnoses. Methods: This study analyzed 170 audio recordings, transcripts, and paired neuropsychological test results from 135 participants selected from the Framingham Heart Study (FHS), comprising 97 recordings of cognitively normal participants and 73 recordings of cognitively impaired participants. Acoustic and linguistic features of the voice samples were correlated with cognitive performance measures to assess their association. Results: Language and voice features, when combined with demographic variables, performed with an AUC of 0.942 (95% CI 0.929-0.983) in predicting cognitive status. Features with good predictive power included the acoustic features mean spectral slope in the 500-1500 Hz band, variation in the F2 bandwidth, and variation in Mel-Frequency Cepstral Coefficient (MFCC) 1; the demographic features employment, education, and age; and the text features number of words, number of compound words, number of unique nouns, and number of proper names. Conclusion: Several linguistic and acoustic biomarkers show correlations and predictive power with regard to neuropsychological testing results and cognitive impairment diagnoses, including dementia. This initial study paves the way for a follow-up comprehensive study incorporating the entire FHS cohort.
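To make the kind of modeling described above concrete, the sketch below combines per-recording acoustic summaries (e.g., MFCC statistics) with demographic and text features and scores a binary classifier by AUC. It is a minimal Python illustration with placeholder data, file paths, and feature names, not the study's actual pipeline.

```python
# Minimal sketch (not the study's pipeline): acoustic summaries + tabular
# features -> logistic regression, evaluated by AUC. All data are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def acoustic_summary(wav_path):
    """Summarize one recording, e.g., mean and variation of MFCC 1."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return np.array([mfcc[0].mean(), mfcc[0].std()])

# In practice, X would stack acoustic_summary(...) for each recording alongside
# demographic (age, education, employment) and transcript-derived word counts.
rng = np.random.default_rng(0)
X = rng.random((170, 10))            # placeholder feature matrix
y = rng.integers(0, 2, 170)          # placeholder labels: 1 = cognitively impaired
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```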
The identification of drug-drug interactions (DDIs) is important for patient safety; yet, compared to other pharmacovigilance work, relatively little research has been conducted in this space. Recent work has successfully applied a method of deriving distributed vector representations from structured biomedical knowledge, known as Embedding of Semantic Predications (ESP), to the problem of predicting individual drug side effects. In the current paper, we extend this work by applying ESP to the problem of predicting polypharmacy side effects for particular drug combinations, building on a recent reconceptualization of this problem as a network of drug nodes connected by side effect edges. We evaluate ESP embeddings derived from the resulting graph on a side-effect prediction task against a previously reported graph convolutional neural network approach, using the same data and evaluation methods. We demonstrate that ESP models perform better, while being faster to train, more reusable, and significantly simpler.
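As a rough illustration of embedding-based triple scoring in the spirit of ESP, which uses holographic binding operations (this sketch is not the published implementation), a drug-drug-side-effect triple can be scored via circular correlation of the two drug vectors. The drug names, side-effect labels, and embeddings below are random placeholders; in practice the vectors would be trained on the knowledge graph.

```python
# Minimal sketch of holographic-style triple scoring (ESP-like, not the
# published implementation). Embeddings and names are random placeholders.
import numpy as np

def circular_correlation(a, b):
    """[a (*) b]_k = sum_i a[i] * b[(i + k) % d], computed via FFT."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def score(drug_i, side_effect, drug_j, drug_emb, rel_emb):
    """Higher score = (drug_i, drug_j) is more plausibly linked to side_effect."""
    binding = circular_correlation(drug_emb[drug_i], drug_emb[drug_j])
    return float(rel_emb[side_effect] @ binding)

rng = np.random.default_rng(0)
dim = 64
drug_emb = {d: rng.standard_normal(dim) for d in ["drug_a", "drug_b", "drug_c"]}
rel_emb = {s: rng.standard_normal(dim) for s in ["nausea", "arrhythmia"]}
print(score("drug_a", "nausea", "drug_b", drug_emb, rel_emb))
```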
The increasing adoption of message-based behavioral therapy enables new approaches to assessing mental health using linguistic analysis of patient-generated text. Word counting approaches have demonstrated utility for linguistic feature extraction, but deep learning methods hold additional promise given recent advances in this area. We evaluated the utility of emotion features extracted using a BERT-based model in comparison to emotions extracted using word counts as predictors of symptom severity in a large set of messages from text-based therapy sessions involving over 6,500 unique patients, accompanied by data from repeatedly administered symptom scale measurements. BERT-based emotion features explained more variance in regression models of symptom severity, and improved predictive modeling of scale-derived diagnostic categories. However, LIWC categories that are not directly related to emotions provided valuable and complementary information for modeling of symptom severity, indicating a role for both approaches in inferring the mental states underlying patient-generated language.
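The contrast between transformer-derived emotion features and word-count features can be sketched as follows; the Transformers pipeline's default DistilBERT sentiment model is used here purely as a stand-in for a BERT-based emotion classifier (it is not the model from this work), and the messages and severity scores are invented.

```python
# Sketch only: transformer-derived per-message scores as regression features.
# The default DistilBERT sentiment model stands in for a BERT emotion model.
import numpy as np
from sklearn.linear_model import LinearRegression
from transformers import pipeline

clf = pipeline("text-classification", top_k=None)  # return a score per label

def transformer_features(messages):
    """One row per message, one column per classifier label."""
    rows = []
    for scores in clf(messages, truncation=True):
        rows.append([s["score"] for s in sorted(scores, key=lambda s: s["label"])])
    return np.array(rows)

messages = ["I can't sleep and everything feels pointless.",
            "Today went a little better than I expected.",
            "I keep canceling plans with everyone."]
severity = np.array([18.0, 6.0, 14.0])   # hypothetical symptom-scale scores
X = transformer_features(messages)
print(LinearRegression().fit(X, severity).coef_)
```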
Background: Behavioral activation (BA) is rooted in the behavioral theory of depression, which states that increased exposure to meaningful, rewarding activities is a critical factor in the treatment of depression. Assessing constructs relevant to BA currently requires the administration of standardized instruments, such as the Behavioral Activation for Depression Scale (BADS), which places a burden on patients and providers, among other potential limitations. Previous work has shown that depressed and nondepressed individuals may use language differently and that automated tools can detect these differences. The increasing use of online, chat-based mental health counseling presents an unparalleled resource for automated longitudinal linguistic analysis of patients with depression, with the potential to illuminate the role of reward exposure in recovery. Objective: This work investigated how linguistic indicators of planning and participation in enjoyable activities identified in online, text-based counseling sessions relate to depression symptomatology over time. Methods: Using distributional semantics methods applied to a large corpus of text-based online therapy sessions, we devised a set of novel BA-related categories for the Linguistic Inquiry and Word Count (LIWC) software package. We then analyzed the language used by 10,000 patients in online therapy chat logs for indicators of activation and other depression-related markers using LIWC. Results: Despite their conceptual and operational differences, both previously established LIWC markers of depression and our novel linguistic indicators of activation were strongly associated with depression scores (Patient Health Questionnaire-9 [PHQ-9]) and longitudinal patient trajectories. Emotional tone; pronoun rates; words related to sadness, health, and biology; and BA-related LIWC categories appear to be complementary, explaining more of the variance in the PHQ-9 score together than they do independently. Conclusions: This study enables further work in automated diagnosis and assessment of depression, the refinement of BA psychotherapeutic strategies, and the development of predictive models for decision support.
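The word-counting side of this approach can be illustrated with a small dictionary-based sketch; this shows the general idea rather than LIWC itself, and the category word lists and PHQ-9 values are hypothetical.

```python
# Dictionary-based category rates (illustrative; not LIWC and not the study's
# categories). Word lists and PHQ-9 scores are hypothetical.
import re
from scipy.stats import pearsonr

categories = {
    "activation": {"plan", "planning", "went", "walk", "gym", "meet", "friends"},
    "sadness": {"sad", "hopeless", "cry", "empty"},
}

def category_rates(text):
    """Fraction of tokens in each category, per message."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {c: sum(t in words for t in tokens) / n for c, words in categories.items()}

messages = ["I plan to meet friends and go for a walk this weekend.",
            "I feel sad and hopeless, I just cry all day.",
            "Went to the gym, then met a friend for coffee."]
phq9 = [5, 19, 4]                                  # hypothetical paired PHQ-9 scores
activation = [category_rates(m)["activation"] for m in messages]
print(pearsonr(activation, phq9))                  # correlation with symptom severity
```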
Background: Lung cancer is the most common cause of cancer-related death in the United States (US), with most patients diagnosed at later stages (3 or 4). While most patients are diagnosed following symptomatic presentation, no studies have compared symptoms and physical examination signs at or prior to diagnosis from electronic health records (EHR) in the US. Objective: To identify symptoms and signs in patients prior to lung cancer diagnosis in EHR data. Study Design: Case-control study. Methods: We studied 698 primary lung cancer cases in adults diagnosed between January 1, 2012 and December 31, 2019, and 6,841 controls matched by age, sex, smoking status, and type of clinic. Coded and free-text data from the EHR were extracted from the 2 years prior to the diagnosis date for cases and the index date for controls. Univariate and multivariate conditional logistic regression were used to identify symptoms and signs associated with lung cancer. Analyses were repeated excluding symptom data from 1, 3, 6, and 12 months before the diagnosis/index dates. Results: Eleven symptoms and signs recorded during the study period were associated with significantly higher odds of being a lung cancer case in multivariate analyses. Of these, seven were significantly associated with lung cancer six months prior to diagnosis: hemoptysis (OR 3.2, 95% CI 1.9-5.3), cough (OR 3.1, 95% CI 2.4-4.0), chest crackles or wheeze (OR 3.1, 95% CI 2.3-4.1), bone pain (OR 2.7, 95% CI 2.1-3.6), back pain (OR 2.5, 95% CI 1.9-3.2), weight loss (OR 2.1, 95% CI 1.5-2.8), and fatigue (OR 1.6, 95% CI 1.3-2.1). Conclusions: Patients diagnosed with lung cancer appear to have symptoms and signs recorded in the EHR that distinguish them from similar matched patients in ambulatory care, often six months or more before their diagnosis. These findings suggest opportunities to improve the diagnostic process for lung cancer in the US.
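The matched case-control analysis pattern reported above can be sketched with a conditional logistic regression on simulated strata (statsmodels' ConditionalLogit); the data, symptom prevalences, and 1:4 matching below are placeholders, not the study's records or code.

```python
# Sketch only (not the study's analysis): conditional logistic regression on
# matched case-control strata; simulated data, odds ratios with 95% CIs.
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_strata, per_stratum = 200, 5                      # 1 case + 4 matched controls (placeholder)
groups = np.repeat(np.arange(n_strata), per_stratum)
y = np.tile([1, 0, 0, 0, 0], n_strata)              # case indicator within each matched set
cough = rng.binomial(1, np.where(y == 1, 0.40, 0.15))        # symptom more common in cases
hemoptysis = rng.binomial(1, np.where(y == 1, 0.10, 0.02))
X = np.column_stack([cough, hemoptysis])

res = ConditionalLogit(y, X, groups=groups).fit()
print("odds ratios:", np.exp(np.asarray(res.params)))
print("95% CIs:", np.exp(np.asarray(res.conf_int())))
```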