A significant proportion of the electroencephalography (EEG) literature focuses on differences in historically pre-defined frequency bands of the power spectrum, typically referred to as alpha, beta, gamma, theta and delta waves. Here, we review 184 EEG studies that report differences in frequency bands in the resting state condition (eyes open and closed) across a spectrum of psychiatric disorders including depression, attention deficit hyperactivity disorder (ADHD), autism, addiction, bipolar disorder, anxiety, panic disorder, post-traumatic stress disorder (PTSD), obsessive compulsive disorder (OCD) and schizophrenia to determine patterns across disorders. Aggregating across all reported results, we demonstrate that characteristic patterns of power change within specific frequency bands are not necessarily unique to any one disorder but show substantial overlap across disorders as well as variability within disorders. In particular, we show that the most dominant pattern of change, across several disorder types including ADHD, schizophrenia and OCD, is a power increase across lower frequencies (delta and theta) and a decrease across higher frequencies (alpha, beta and gamma). However, a considerable number of disorders, such as PTSD, addiction and autism, show no dominant trend for spectral change in any direction. We report consistency and validation scores across the disorders and conditions, showing that the dominant result across all disorders is typically only 2.2 times as likely to occur in the literature as alternate results, and typically rests on fewer than 250 study participants when summed across all studies reporting this result. Furthermore, the magnitudes of the results were infrequently reported and, where given, were typically small (between 20% and 30%) and correlated only weakly with symptom severity scores. Finally, we discuss the many methodological challenges and limitations of such frequency band analysis across the literature.
These results caution against interpreting findings from studies that consider only one disorder in isolation, and against overstating the overall potential of this approach for delivering valuable insights in the field of mental health.
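The consistency score described above can be formalized as the ratio between the count of the dominant reported finding and the count of the most common alternate. The sketch below uses invented study counts, and the review's exact scoring method may differ:

```python
from collections import Counter

def consistency_score(findings):
    """Ratio of the most frequent reported finding to the next
    most common alternate across a set of studies."""
    ranked = Counter(findings).most_common()
    dominant, dominant_n = ranked[0]
    # Fall back to 1 if every study agrees (no alternate result)
    alternate_n = ranked[1][1] if len(ranked) > 1 else 1
    return dominant, dominant_n / alternate_n

# Invented example: directions of theta-band power change reported
# by 20 hypothetical studies of one disorder
reports = ["increase"] * 11 + ["decrease"] * 5 + ["no change"] * 4
dominant, ratio = consistency_score(reports)  # ("increase", 2.2)
```

Under this formalization, a ratio near 1 would mean the literature is split, while the 2.2 reported above indicates a dominant result only about twice as common as its strongest alternate.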
Across the landscape of mental health research and diagnosis, there is a diverse range of questionnaires and interviews available to clinicians and researchers for determining patient treatment plans or investigating internal and external etiologies. Although these tools have each been individually assessed for validity and reliability, there is little research examining the consistency between them in terms of which symptoms they assess and how they assess those symptoms. Here, we provide an analysis of 126 different questionnaires and interviews commonly used to diagnose and screen for 10 different disorder types including depression, anxiety, obsessive compulsive disorder (OCD), posttraumatic stress disorder (PTSD), attention deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), addiction, bipolar disorder, eating disorder, and schizophrenia, as well as comparator questionnaires and interviews that offer an all-in-one cross-disorder assessment of mental health. We demonstrate substantial inconsistency in the inclusion and emphasis of symptoms assessed within disorders, as well as considerable symptom overlap across disorder-specific tools. Within the same disorder, similarity scores across assessment tools ranged from 29% for bipolar disorder to a maximum of 58% for OCD. Furthermore, looking across disorders, 60% of symptoms were assessed in at least half of all disorders, illustrating the extensive overlap in symptom profiles between disorder-specific assessment tools. Biases in assessment toward emotional, cognitive, physical or behavioral symptoms were also observed, further adding to the heterogeneity across assessments. Other characteristics, such as the time period over which symptoms were assessed and whether the focus was on the frequency, severity or duration of symptoms, also varied substantially across assessment tools.
This inconsistent and heterogeneous assessment landscape hinders clinical diagnosis and treatment and frustrates understanding of the social, environmental, and biological factors that contribute to mental health symptoms and disorders. Altogether, it underscores the need for standardized assessment tools that are more disorder agnostic and span the full spectrum of mental health symptoms, to aid the understanding of underlying etiologies and the discovery of new treatments for psychiatric dysfunction.
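The similarity scores reported above can be illustrated as a set-overlap (Jaccard) computation over the symptom lists of two tools. The symptom lists below are invented, and the paper's exact similarity metric is not reproduced here:

```python
def symptom_similarity(tool_a, tool_b):
    """Jaccard similarity between two tools' symptom sets:
    shared symptoms divided by all symptoms either tool assesses."""
    a, b = set(tool_a), set(tool_b)
    return len(a & b) / len(a | b)

# Invented symptom lists for two hypothetical depression questionnaires
tool_a = ["sadness", "anhedonia", "sleep problems",
          "fatigue", "appetite change", "guilt"]
tool_b = ["sadness", "anhedonia", "sleep problems",
          "fatigue", "crying", "irritability"]
similarity = symptom_similarity(tool_a, tool_b)  # 4 shared / 8 total = 0.5
```

On this kind of measure, the 29% to 58% within-disorder range above means even the most consistent disorder's tools agree on barely more than half of their combined symptom coverage.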
Background: Existing mental health assessment tools provide an incomplete picture of symptom experience and create ambiguity, bias, and inconsistency in mental health outcomes. Furthermore, by focusing on disorders and dysfunction, they do not allow a view of mental health and well-being across a general population.

Objective: This study aims to demonstrate the outcomes and validity of a new web-based assessment tool called the Mental Health Quotient (MHQ), which is designed for the general population. The MHQ covers the complete breadth of clinical mental health symptoms and also captures healthy mental functioning to provide a complete profile of an individual's mental health from clinical to thriving.

Methods: The MHQ was developed based on the coding of symptoms assessed in 126 existing Diagnostic and Statistical Manual of Mental Disorders (DSM)–based psychiatric assessment tools as well as neuroscientific criteria laid out by Research Domain Criteria to arrive at a comprehensive set of semantically distinct mental health symptoms and attributes. These were formulated into questions on a 9-point scale with both positive and negative dimensions and developed into a web-based tool that takes approximately 14 min to complete. As its output, the assessment provides overall MHQ scores as well as subscores for 6 categories of mental health that distinguish clinical and at-risk groups from healthy populations based on a nonlinear scoring algorithm. MHQ items were also mapped to the DSM fifth edition (DSM-5), and clinical diagnostic criteria for 10 disorders were applied to the MHQ outcomes to cross-validate scores labeled at-risk and clinical. Initial data were collected from 1665 adult respondents to test the tool.
Results: Scores in the normal healthy range spanned from 0 to 200 for the overall MHQ, with an average score of approximately 100 (SD 45), and from 0 to 100 with average scores between 48 (SD 21) and 55 (SD 22) for subscores in each of the 6 mental health subcategories. Overall, 2.46% (41/1665) and 13.09% (218/1665) of respondents were classified as clinical and at-risk, respectively, with negative scores. Validation against DSM-5 diagnostic criteria showed that 95% (39/41) of those designated clinical were positive for at least one DSM-5–based disorder, whereas only 1.14% (16/1406) of those with a positive MHQ score met the diagnostic criteria for a mental health disorder.

Conclusions: The MHQ provides a fast, easy, and comprehensive way to assess population mental health and well-being; identify at-risk individuals and subgroups; and provide diagnosis-relevant information across 10 disorders.
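The classification into clinical, at-risk, and healthy bands can be sketched as a simple thresholding of the overall score. The abstract specifies only that clinical and at-risk respondents receive negative scores; the -50 clinical cutoff below is an invented placeholder, and the published nonlinear scoring algorithm is more involved:

```python
CLINICAL_CUTOFF = -50  # assumed value; not specified in the abstract

def mhq_category(score):
    """Coarse band for an overall MHQ score. Negative scores mark
    clinical/at-risk respondents per the abstract; the split between
    those two bands here is an invented placeholder."""
    if score < CLINICAL_CUTOFF:
        return "clinical"
    if score < 0:
        return "at-risk"
    return "healthy"

bands = [mhq_category(s) for s in (-80, -10, 120)]
# bands == ["clinical", "at-risk", "healthy"]
```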
Assessment of mental illness typically relies on a disorder classification system that is considered to be at odds with the vast disorder comorbidity and symptom heterogeneity that exists within and across patients. Patients with the same disorder diagnosis exhibit diverse symptom profiles and comorbidities, creating numerous clinical and research challenges. Here we provide a quantitative analysis of symptom heterogeneity and disorder comorbidity across a sample of 107,349 adult individuals (aged 18–85 years) from 8 English-speaking countries. Data were acquired using the Mental Health Quotient, an anonymous, online, self-report tool that comprehensively evaluates symptom profiles across 10 common mental health disorders. Dissimilarity of symptom profiles within and between disorders was then computed. We found a continuum of symptom prevalence rather than a clear separation between normal and disordered. While 58.7% of those with 5 or more clinically significant symptoms did not map to the diagnostic criteria of any of the 10 DSM-5 disorders studied, those with symptom profiles that mapped to at least one disorder had, on average, 20 clinically significant symptoms. Within this group, the heterogeneity of symptom profiles was almost as high within a disorder label as between 2 disorder labels, and was not separable from that of randomly selected groups of individuals with at least one of any of the 10 disorders. Overall, these results quantify the scale of misalignment between clinical symptom profiles and DSM-5 disorder labels and demonstrate that, when the complete mental health symptom profile of an individual is considered, DSM-5 disorder criteria do not separate individuals any better than random grouping. Greater emphasis on empirical, disorder-agnostic approaches to symptom profiling would help overcome existing challenges with heterogeneity and comorbidity, aiding clinical and research outcomes.
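The dissimilarity computation described above can be sketched as a normalized Hamming distance between binary symptom profiles. Both the profiles and the exact distance measure used in the study are assumptions here:

```python
def profile_dissimilarity(p, q):
    """Normalized Hamming distance between two equal-length
    binary symptom profiles (1 = symptom present)."""
    if len(p) != len(q):
        raise ValueError("profiles must be the same length")
    return sum(a != b for a, b in zip(p, q)) / len(p)

# Invented 10-symptom binary profiles for two individuals
person_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
person_b = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
dissimilarity = profile_dissimilarity(person_a, person_b)  # 4/10 = 0.4
```

The study's within-disorder versus between-disorder comparison amounts to averaging such pairwise distances over groups of individuals sharing (or not sharing) a diagnostic label.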
Background: The Mental Health Quotient (MHQ) is an anonymous web-based assessment of mental health and well-being that comprehensively covers symptoms across 10 major psychiatric disorders, as well as positive elements of mental function. It uses a novel life impact scale and provides a score to the individual that places them on a spectrum from Distressed to Thriving, along with a personal report that offers self-care recommendations. Since April 2020, the MHQ has been freely deployed as part of the Mental Health Million Project.

Objective: This paper demonstrates the reliability and validity of the MHQ, including the construct validity of the life impact scale, sample and test-retest reliability of the assessment, and criterion validation of the MHQ with respect to clinical burden and productivity loss.

Methods: Data were taken from the Mental Health Million open-access database (N=179,238) and included responses from English-speaking adults (aged ≥18 years) from the United States, Canada, the United Kingdom, Ireland, Australia, New Zealand, South Africa, Singapore, India, and Nigeria collected during 2021. To assess sample reliability, random demographically matched samples (each 11,033/179,238, 6.16%) were compared within the same 6-month period. Test-retest reliability was determined using the subset of individuals who had taken the assessment twice ≥3 days apart (1907/179,238, 1.06%). To assess the construct validity of the life impact scale, additional questions were asked about the frequency and severity of an example symptom (feelings of sadness, distress, or hopelessness; 4247/179,238, 2.37%). To assess criterion validity, elements rated as having a highly negative life impact by a respondent (equivalent to experiencing the symptom ≥5 days a week) were mapped to clinical diagnostic criteria to calculate the clinical burden (174,618/179,238, 97.42%). In addition, MHQ scores were compared with the number of workdays missed or worked with reduced productivity in the past month (7625/179,238, 4.25%).

Results: Distinct samples collected during the same period had indistinguishable MHQ distributions, and MHQ scores were correlated at r=0.84 between retakes within an 8- to 120-day period. Life impact ratings were correlated with the frequency and severity of symptoms, with a clear linear relationship (R2>0.99). Furthermore, aggregate MHQ scores were systematically related to both clinical burden and productivity. At one end of the scale, 89.08% (8986/10,087) of those in the Distressed category mapped to one or more disorders and had an average productivity loss of 15.2 (SD 11.2; SEM [standard error of measurement] 0.5) days per month. In contrast, at the other end of the scale, 0% (1/24,365) of those in the Thriving category mapped to any of the 10 disorders, with an average productivity loss of 1.3 (SD 3.6; SEM 0.1) days per month.

Conclusions: The MHQ is a valid and reliable assessment of mental health and well-being when delivered anonymously on the web.
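The test-retest figure (r=0.84) is a Pearson correlation between first and second MHQ takes; the following is a minimal sketch with hypothetical score pairs:

```python
import statistics as st

def pearson_r(x, y):
    """Pearson correlation between paired first and second takes."""
    mx, my = st.fmean(x), st.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (len(x) * st.pstdev(x) * st.pstdev(y))

# Hypothetical overall MHQ scores for five repeat respondents
first_take = [10, 40, 90, 120, 150]
second_take = [15, 35, 95, 115, 160]
r = pearson_r(first_take, second_take)  # close to 1 for these values
```

An r of 0.84 over 1907 retakes, as reported above, indicates that individual scores are largely stable across administrations despite the anonymous, self-selected sample.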