Purpose: Treatment benefit, as assessed using clinical outcome assessments (COAs), is a key endpoint in many clinical trials at both the individual and group levels. Anchor-based methods can aid interpretation of COA change scores beyond statistical significance and help derive a meaningful change threshold (MCT). However, evidence-based guidance on the selection of appropriately related anchors is lacking.

Methods: A simulation study was conducted that varied sample size, change-score variability, and anchor correlation strength to assess the impact of these variables on recovery of the true simulated MCT at both the individual and group levels. At the individual level, Receiver Operating Characteristic (ROC) curve and Predictive Modelling (PM) anchor analyses were conducted. At the group level, the mean change scores of the ‘not-improved’ and ‘improved’ groups were compared.

Results: Sample size, change-score variability, and the magnitude of the anchor correlation affected the accuracy of the estimated MCT. At the individual level, ROC curves were less accurate than PM methods at recovering the true MCT. For both methods, smaller samples led to greater variability in the estimated MCT, with variability higher still for ROC. Anchors correlating more weakly with COA change scores produced greater variability in the estimated MCT. An anchor correlation of 0.50-0.60 identified the true MCT cut-point under certain conditions when using ROC, whereas anchor correlations as low as 0.30 were adequate under certain conditions when using PM. At the group level, the MCT was consistently underestimated regardless of the anchor correlation.

Conclusion: The findings show that the chosen method, the sample size, and the variability in change scores influence the anchor correlation strength needed to identify a true individual-level MCT; this often needs to be higher than the commonly accepted threshold of 0.30. Correlations stronger than 0.30 are also required at the group level, although a specific recommendation is not provided. These results can assist researchers in selecting anchors and assessing their quality.
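To illustrate the three anchor-based approaches named above, the following is a minimal sketch in Python, not the authors' simulation code. The parameter values (sample size, change-score SD, anchor correlation, true MCT), the dichotomisation of a latent anchor at its mean, the Youden-index cut-point for ROC, the 0.5-probability cut-point for PM, and the group-level difference in means are all illustrative assumptions; the paper's actual simulation design may differ.

```python
# Hypothetical sketch of anchor-based MCT estimation:
#  - individual level via ROC (Youden's J) and predictive modelling (logistic regression)
#  - group level via the difference in mean change between anchor groups
# Assumed parameters (not taken from the paper): n, sd_change, rho, true_mct.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n, sd_change, rho, true_mct = 200, 10.0, 0.5, 5.0

# Simulate COA change scores and a correlated latent anchor rating.
cov = [[sd_change**2, rho * sd_change],
       [rho * sd_change, 1.0]]
change, latent_anchor = rng.multivariate_normal([true_mct, 0.0], cov, size=n).T

# Dichotomise the anchor into 'improved' (1) vs 'not improved' (0).
improved = (latent_anchor > 0).astype(int)

# Individual level, ROC: change-score cut-point maximising Youden's J = TPR - FPR.
fpr, tpr, thresholds = roc_curve(improved, change)
mct_roc = thresholds[np.argmax(tpr - fpr)]

# Individual level, PM: change score at which the modelled probability
# of being 'improved' crosses 0.5 (logit equals zero).
lr = LogisticRegression().fit(change.reshape(-1, 1), improved)
mct_pm = -lr.intercept_[0] / lr.coef_[0, 0]

# Group level: difference in mean change between 'improved' and 'not improved'.
mct_group = change[improved == 1].mean() - change[improved == 0].mean()

print(f"ROC MCT: {mct_roc:.2f}  PM MCT: {mct_pm:.2f}  Group-level MCT: {mct_group:.2f}")
```

Re-running this sketch across values of `n`, `sd_change`, and `rho` is one way to reproduce the kind of comparison the abstract describes: how closely each method's estimate tracks `true_mct` as the anchor correlation weakens or the sample shrinks.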