Background: Accurate reporting of patient symptoms is critical for diagnosis and therapeutic monitoring in psychiatry. Smartphones offer an accessible, low-cost means to collect patient symptoms in real time and aid in care.

Objective: To investigate adherence among psychiatric outpatients diagnosed with major depressive disorder in using their personal smartphones to run a custom app that monitors Patient Health Questionnaire-9 (PHQ-9) depression symptoms, and to examine the correlation of these scores with traditionally administered (paper-and-pencil) PHQ-9 scores.

Methods: A total of 13 patients with major depressive disorder, referred by their clinicians, received standard outpatient treatment and, in addition, used their personal smartphones to run the study app to monitor their symptoms. Subjects downloaded the Mindful Moods app onto their personal smartphones and completed up to three survey sessions per day, during which a randomized subset of the PHQ-9 symptoms of major depressive disorder was assessed on a Likert scale. The study lasted 29 or 30 days without additional follow-up. Outcome measures included adherence, measured as the percentage of completed survey sessions, and estimates of daily PHQ-9 scores collected from the smartphone app as well as from the traditionally administered PHQ-9.

Results: Overall adherence was 77.78% (903/1161) and varied with time of day. PHQ-9 estimates collected from the app correlated strongly (r=.84) with traditionally administered PHQ-9 scores, but app-collected scores were 3.02 (SD 2.25) points higher on average. More subjects reported suicidal ideation using the app than on the traditionally administered PHQ-9.

Conclusions: Patients with major depressive disorder are able to use an app on their personal smartphones to self-assess their symptoms of major depressive disorder with high levels of adherence, and these app-collected results correlate with the traditionally administered PHQ-9. Scores recorded from the app may be more sensitive and better able to capture suicidality than the traditional PHQ-9.
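The abstract reports that daily PHQ-9 totals were estimated from a randomized subset of the nine items at each session. The study's actual estimator is not described here; one plausible approach, sketched below purely for illustration, is to scale the mean score of the answered items up to the full nine-item range (PHQ-9 items are scored 0-3, so totals range 0-27). The function name and scaling rule are assumptions, not the published method.

```python
def estimate_phq9(subset_scores, n_items=9):
    """Estimate a full PHQ-9 total (0-27) from a randomized
    subset of item responses (each scored 0-3), by scaling the
    mean item score up to all nine items.

    NOTE: illustrative only; the study abstract does not
    specify how daily totals were actually estimated.
    """
    if not subset_scores:
        raise ValueError("need at least one item response")
    mean_item = sum(subset_scores) / len(subset_scores)
    return round(mean_item * n_items, 1)

# e.g. three sampled items answered 2, 1, 3 -> mean 2.0 -> 18.0
print(estimate_phq9([2, 1, 3]))
```

A scheme like this keeps each session short (a few items rather than all nine) while still yielding a comparable daily total, at the cost of added estimation noise on any single day.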
CONTEXT: Screening children for social determinants of health (SDOHs) has gained attention in recent years, but there is a deficit in understanding the present state of the science.

OBJECTIVE: To systematically review SDOH screening tools used with children, examine their psychometric properties, and evaluate how they detect early indicators of risk and inform care.
Background: There are over 165,000 mHealth apps currently available to patients, but few have undergone an external quality review. Furthermore, no standardized review method exists, and little has been done to examine the consistency of the evaluation systems themselves.

Objective: We sought to determine which measures for evaluating the quality of mHealth apps have the greatest interrater reliability.

Methods: We identified 22 measures for evaluating the quality of apps from the literature. A panel of 6 reviewers rated the top 10 depression apps and top 10 smoking cessation apps from the Apple iTunes App Store on these measures. Krippendorff's alpha was calculated for each measure and reported by app category and in aggregate.

Results: The measure for interactiveness and feedback had the greatest overall interrater reliability (alpha=.69). Presence of password protection (alpha=.65), whether the app was uploaded by a health care agency (alpha=.63), the number of consumer ratings (alpha=.59), and several other measures had moderate interrater reliability (alphas>.5). There was least agreement over whether apps had errors or performance issues (alpha=.15), stated advertising policies (alpha=.16), and were easy to use (alpha=.18). The interrater reliabilities of a number of measures differed substantially when applied to depression versus smoking cessation apps.

Conclusions: We found wide variation in the interrater reliability of measures used to evaluate apps, and some measures are more robust across app categories than others. The measures with the highest interrater reliability tended to be those that involved the least rater discretion. Clinical quality measures such as effectiveness, ease of use, and performance had relatively poor interrater reliability. Subsequent research is needed to determine consistent means for evaluating the performance of apps. Patients and clinicians should consider conducting their own assessments of apps, in conjunction with evaluating information from reviews.
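The reliability statistic used throughout this study, Krippendorff's alpha, is defined as 1 minus the ratio of observed to expected disagreement. As a minimal sketch (assuming nominal-scale ratings and no missing values within a unit; the function name and data layout are illustrative, not taken from the study), alpha for a set of rated apps can be computed from per-unit category counts:

```python
from collections import Counter

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    `ratings` is a list of units (e.g. apps); each unit is the
    list of category labels assigned by its raters. Only units
    rated by >= 2 raters contribute pairable values.
    Illustrative sketch; assumes nominal-scale categories.
    """
    units = [u for u in ratings if len(u) >= 2]
    o_disagree = 0.0       # sum of off-diagonal coincidences (observed)
    marginals = Counter()  # n_c: pairable values per category
    n = 0                  # total pairable values
    for u in units:
        m = len(u)
        for c, n_uc in Counter(u).items():
            marginals[c] += n_uc
            n += n_uc
            # pairs of this value with *different* values in the unit
            o_disagree += n_uc * (m - n_uc) / (m - 1)
    # expected disagreement for nominal data: sum_{c!=k} n_c*n_k / (n-1)
    d_e = (n * n - sum(v * v for v in marginals.values())) / (n - 1)
    return 1.0 if d_e == 0 else 1.0 - o_disagree / d_e

# Perfect agreement across two units -> alpha = 1.0
print(krippendorff_alpha_nominal([["yes", "yes"], ["no", "no"]]))
```

Values near 1 indicate near-perfect agreement, values near 0 indicate agreement no better than chance, which is why measures such as "has errors or performance issues" (alpha=.15) are effectively unreliable across raters.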