Background Interpretation of differences or changes in patient-reported outcome scores should consider not only statistical significance but also clinical relevance. Accordingly, accurate determination of the minimally important difference (MID) is crucial for assessing the effectiveness of health care interventions, as well as for sample size calculation. Several methods have been proposed to determine the MID. Our aim was to review the statistical methods used to determine the MID in patient-reported outcome (PRO) questionnaires in cancer patients, focusing on distribution- and anchor-based approaches, and to present the variability of the criteria used as well as possible limitations. Methods We performed a systematic search using PubMed. We searched for all cancer studies related to MID determination on a PRO questionnaire. Two reviewers independently screened titles and abstracts to identify relevant articles. Data were extracted from eligible articles using a predefined data collection form. Discrepancies were resolved by discussion and the involvement of a third reviewer. Results Sixty-three articles were identified, of which 46 were retained for final analysis. Both distribution- and anchor-based approaches were used to assess the MID in 37 studies (80.4%). Different time points were used to apply the distribution-based method, and the most frequently reported distribution criterion was 0.5 standard deviations at baseline. A change in an external PRO scale (N = 13, 30.2%) and performance status (N = 15, 34.9%) were the most frequently used anchors. The stability of the MID over time was rarely investigated, and only 28.2% of studies used at least 3 assessment timepoints. The robustness of the anchor-based MID was questionable in 37.2% of the studies, where the minimal number of patients per anchor category was less than 20. Conclusion Efforts are needed to improve the quality of the methodology used for MID determination in PRO questionnaires used in oncology. In particular, increased attention should be paid to sample size to guarantee reliable results. This could increase the use of these specific thresholds in future studies. Electronic supplementary material The online version of this article (10.1186/s12955-018-1055-z) contains supplementary material, which is available to authorized users.
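To make the two approaches concrete, the sketch below computes a distribution-based MID (0.5 standard deviations of baseline scores, the criterion most frequently reported in the review) and an anchor-based MID (mean score change in patients whose anchor rating indicates a minimal improvement). The data frame, column names, and anchor labels are hypothetical, not taken from any study in the review.

```python
import pandas as pd

# Hypothetical PRO data: one row per patient with baseline score, follow-up
# score, and an external anchor rating.
df = pd.DataFrame({
    "baseline":  [55, 60, 48, 72, 66, 59, 63, 70],
    "follow_up": [62, 58, 55, 80, 71, 66, 61, 79],
    "anchor":    ["a little better", "no change", "a little better", "much better",
                  "a little better", "a little better", "no change", "much better"],
})

# Distribution-based MID: half the standard deviation of baseline scores.
mid_distribution = 0.5 * df["baseline"].std(ddof=1)

# Anchor-based MID: mean change in the group reporting a minimal improvement
# on the anchor. With real data this group should contain enough patients
# (e.g., at least 20) for the estimate to be robust.
change = df["follow_up"] - df["baseline"]
mid_anchor = change[df["anchor"] == "a little better"].mean()

print(f"Distribution-based MID (0.5 SD at baseline): {mid_distribution:.1f}")
print(f"Anchor-based MID (minimal improvement group): {mid_anchor:.1f}")
```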
Background Low physical activity is an important risk factor for common physical and mental disorders. Physical activity interventions delivered via smartphones can help users maintain and increase physical activity, but outcomes have been mixed. Purpose Here we assessed, in a microrandomized clinical trial in a student population, the effects of sending daily motivational and feedback text messages on changes in physical activity from one day to the next. Methods We included 93 participants who used a physical activity app, "DIAMANTE," for a period of 6 weeks. Every day, their phone pedometer passively tracked participants' steps. They were microrandomized to receive different types of motivational messages, based on a cognitive-behavioral framework, and feedback on their steps. We used generalized estimating equation (GEE) models to test the effectiveness of feedback and motivational messages on changes in steps from one day to the next. Results Sending any versus no text message initially resulted in an increase in daily steps (729 steps, p = .012), but this effect decreased over time. A multivariate analysis evaluating each text message category separately showed that the initial positive effect was driven by the motivational messages, although this effect was small and only of trend-level significance (717 steps, p = .083), and not by the feedback messages (−276 steps, p = .4). Conclusion Sending motivational physical activity text messages based on a cognitive-behavioral framework may have a positive effect on increasing steps, but this effect decreases with time. Further work is needed to examine whether personalization and contextualization can improve the efficacy of text-messaging interventions on physical activity outcomes. ClinicalTrials.gov Identifier: NCT04440553.
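As an illustration of the analysis described above, the sketch below fits a GEE model relating the day-to-day change in steps to whether a message was sent and to study day, using an exchangeable working correlation within participants. The file name, column names, and interaction structure are assumptions for illustration, not the study's actual analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily data: one row per participant-day with the change in steps
# from the previous day, whether any message was sent, and the study day.
df = pd.read_csv("daily_steps.csv")  # columns: participant_id, day, message_sent, step_change

# GEE with an exchangeable working correlation to account for repeated
# measures within participants. The message-by-day interaction captures
# whether the effect of sending a message changes over the study period.
model = smf.gee(
    "step_change ~ message_sent * day",
    groups="participant_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```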
Objective Providing behavioral health interventions via smartphones allows these interventions to be adapted to the changing behavior, preferences, and needs of individuals. This can be achieved through reinforcement learning (RL), a sub-area of machine learning. However, many challenges can affect the effectiveness of these algorithms in the real world. We provide guidelines for making these algorithm design decisions. Materials and Methods Using thematic analysis, we describe the challenges, considerations, and solutions for algorithm design decisions that arose in a collaboration between health services researchers, clinicians, and data scientists. We draw on the design process of an RL algorithm for the mobile health study "DIAMANTE," which aims to increase physical activity in underserved patients with diabetes and depression. Over the 1.5-year project, we kept track of the research process using collaborative cloud Google Documents, the WhatsApp messenger, and video teleconferencing. We discussed, categorized, and coded critical challenges, then grouped the challenges into thematic process domains. Results Nine challenges emerged, which we divided into 3 major themes: 1. choosing the model for decision-making, including appropriate contextual and reward variables; 2. data handling and collection, such as how to deal with missing or incorrect data in real time; 3. weighing algorithm performance against effectiveness and implementation in real-world settings. Conclusion The creation of effective behavioral health interventions does not depend only on final algorithm performance. Many real-world decisions are needed to formulate the design of the problem parameters to which an algorithm is applied. Researchers must document and evaluate these considerations and decisions before and during the intervention period to increase transparency, accountability, and reproducibility. Trial Registration clinicaltrials.gov, NCT03490253.
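To ground the kind of decision-making algorithm discussed here, the sketch below implements a simple Beta-Bernoulli Thompson-sampling bandit that each day chooses which message category to send and updates its beliefs from a binary reward (e.g., whether the participant's steps increased). This is a generic illustration under assumed message categories, a simulated outcome function, and a simplified reward definition; it is not the algorithm actually deployed in DIAMANTE.

```python
import numpy as np

# Message categories the algorithm can choose between each day (assumed for illustration).
ARMS = ["no_message", "motivational", "feedback"]

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over message categories."""

    def __init__(self, arms):
        self.arms = arms
        # One Beta(alpha, beta) posterior per arm, starting from a uniform prior.
        self.alpha = {a: 1.0 for a in arms}
        self.beta = {a: 1.0 for a in arms}

    def choose(self, rng):
        # Sample a success probability for each arm and pick the largest.
        samples = {a: rng.beta(self.alpha[a], self.beta[a]) for a in self.arms}
        return max(samples, key=samples.get)

    def update(self, arm, reward):
        # Binary reward, e.g., 1 if steps increased from the previous day.
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

def simulate_outcome(arm, rng):
    """Stand-in for the real observation: 1 if the participant's steps increased."""
    base = {"no_message": 0.45, "motivational": 0.55, "feedback": 0.48}
    return int(rng.random() < base[arm])

rng = np.random.default_rng(0)
sampler = ThompsonSampler(ARMS)
for day in range(42):                     # 6-week study period
    arm = sampler.choose(rng)
    reward = simulate_outcome(arm, rng)   # observed from the participant in practice
    sampler.update(arm, reward)
```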