In nonpsychotic MDD outpatients without overt cognitive impairment, clinician assessment of depression severity using either the QIDS-C16 or the HRSD17 may be successfully replaced by either the self-report or the interactive voice response (IVR) version of the QIDS.
Illustrates how categorization spuriously influences apparent dimensionality inferred from (a) principal components (PC), (b) exploratory maximum likelihood (EML) analysis, and (c) LISREL. Simulated continuous, parallel, unifactor "scores" of differing reliability were categorized in various ways to create "items." All forms of categorization spuriously suggested multidimensionality. PC-based indices were more misleading with less reliable data; the reverse was true with inferential (EML and LISREL) indices. Varying item "splits" to create item distribution differences further enhanced these spurious effects. Likewise, multicategory (Likert-type) items were more likely to yield artifacts than dichotomous items using inferential criteria, even though the multicategory data were more reliable. Criteria for dimensionality applicable to continuous (scale-level) data are therefore inappropriate for discrete (item-level) data. The authors are grateful to Calvin P. Carbin, James R. Erickson, and two anonymous reviewers for their valuable comments.
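The categorization artifact this abstract describes can be reproduced in a few lines. The sketch below is a minimal illustration, not the authors' original simulation: it generates truly unidimensional continuous "scores," dichotomizes them at varying cut points ("splits"), and compares the eigenvalues of the two correlation matrices. Sample size, cut points, and the Kaiser eigenvalue-greater-than-one rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 8  # illustrative sample size and item count

# One common factor drives every item: the continuous scores are
# unidimensional by construction.
factor = rng.normal(size=n)
continuous = factor[:, None] + rng.normal(size=(n, k))

# Dichotomize each item at a different cut point, mimicking item
# "splits" that create item distribution differences.
cuts = np.linspace(-1.2, 1.2, k)
items = (continuous > cuts).astype(float)

def corr_eigvals(x):
    """Eigenvalues of the inter-item correlation matrix, descending."""
    return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]

ev_cont = corr_eigvals(continuous)
ev_item = corr_eigvals(items)

# Categorization attenuates the dominant eigenvalue and inflates the
# secondary ones, pushing PC-style criteria toward extra "dimensions."
print("continuous eigenvalues: ", np.round(ev_cont, 2))
print("dichotomized eigenvalues:", np.round(ev_item, 2))
```

With varying splits, the secondary eigenvalues of the dichotomized items rise relative to the continuous case even though nothing multidimensional was simulated, which is the spurious effect the abstract reports.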
The 17-item Hamilton Rating Scale for Depression (HRSD17) and the Montgomery-Åsberg Depression Rating Scale (MADRS) are two widely used clinician-rated symptom scales. A 6-item version of the HRSD (HRSD6) was created by Bech to address the psychometric limitations of the HRSD17. The psychometric properties of these measures were compared using classical test theory (CTT) and item response theory (IRT) methods; IRT methods were also used to equate total scores between any two scales. To assess robustness, data were drawn from two distinctly different outpatient studies of nonpsychotic major depression: a 12-month study of highly treatment-resistant patients (n = 233) and an 8-week acute-phase drug treatment trial (n = 985). MADRS and HRSD6 items generally contributed more to the measurement of depression than HRSD17 items, as shown by higher item-total correlations and higher IRT slope parameters. The MADRS and HRSD6 were unifactorial, while the HRSD17 contained 2 factors. The MADRS showed about twice the precision of either the HRSD17 or the HRSD6 in estimating depression of average severity. An HRSD17 score of 7 corresponded to an 8 or 9 on the MADRS and a 4 on the HRSD6. The MADRS would be superior to the HRSD17 in the conduct of clinical trials.
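The "slope parameter" and "precision" comparisons above come from IRT modeling. A minimal sketch of the underlying two-parameter logistic (2PL) model follows; the parameter values are purely illustrative, not estimates from these studies. The slope a governs how sharply an item's endorsement probability rises with latent severity, and the item's Fisher information a²p(1−p) is the sense in which a steeper item measures more precisely.

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic model: probability of endorsing an item
    given latent severity theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * p * (1 - p).
    Higher information means more precise severity estimation."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# A steeper slope separates patients near the item's difficulty more
# sharply -- the sense in which some items "contributed more" to
# measurement than others.
flat  = p_2pl(1.0, 0.5, 0.0) - p_2pl(-1.0, 0.5, 0.0)
steep = p_2pl(1.0, 2.0, 0.0) - p_2pl(-1.0, 2.0, 0.0)
print(steep > flat)
print(info_2pl(0.0, 2.0, 0.0) > info_2pl(0.0, 0.5, 0.0))
</antml_ignore>```

Summing item information across a scale yields the test information function, which is how one scale can show roughly twice the precision of another at a given severity.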
Objective: The aim of this study was to present the reliability and validity of the Children's Depression Rating Scale-Revised (CDRS-R) in the adolescent age group. Method: Adolescents with symptoms of depression were assessed using the CDRS-R and global severity and functioning scales at screening, baseline, and after 12 weeks of fluoxetine treatment. Global improvement was also assessed at week 12 (or exit). Reliability and validity were analyzed using Classical Test Theory (item-total correlations and internal consistency) and correlations between the CDRS-R and other outcomes. Results: Adolescents (n = 145) were evaluated at screening; 113 (77.9%) met criteria for major depressive disorder, 8 (5.5%) had subthreshold depressive symptoms, and 24 (16.6%) had minimal depressive symptoms. Ninety-four adolescents had a baseline visit after 1 week, and 88 were treated with fluoxetine. Internal consistency for the CDRS-R was good at all three visits (screening: 0.79; baseline: 0.74; exit: 0.92), and total score was highly correlated with global severity (r = 0.87, 0.80, and 0.93; p < 0.01). Only exit CDRS-R score was significantly correlated with global functioning (Children's Global Assessment Scale; r = −0.77; p < 0.01). Reductions on the CDRS-R total score were highly correlated with improvement scores at exit (Clinical Global Impressions-Improvement; r = −0.83; p < 0.01). Conclusions: The results demonstrate good reliability and validity in adolescents with depression.
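The internal-consistency figures reported above (0.79, 0.74, 0.92) are Cronbach's alpha values. As a reference for how such a coefficient is computed from raw item scores, here is a minimal sketch; the function and the toy data are illustrative, not the study's analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Sanity check on toy data: perfectly parallel items yield alpha = 1.
x = np.arange(10.0)
identical_items = np.column_stack([x, x, x])
print(cronbach_alpha(identical_items))
```

Alpha rises toward 1 as items covary more strongly, which is why the tighter symptom coverage at exit (0.92) indicates the items moved together in recovered versus still-depressed adolescents.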