Objective: To develop a set of quality criteria for patient decision support technologies (decision aids).
Design and setting: Two-stage web-based Delphi process using an online rating process to enable international collaboration.
Participants: Individuals from four stakeholder groups (researchers, practitioners, patients, policy makers) representing 14 countries reviewed evidence summaries and rated the importance of 80 criteria in 12 quality domains on a 1 to 9 scale. In the second round, participants received feedback from the first round and reassessed the 80 criteria plus three new ones.
Main outcome measure: Aggregate ratings for each criterion, calculated as medians weighted to compensate for the different sizes of the stakeholder groups; criteria rated between 7 and 9 were retained.
Results: 212 nominated people were invited to participate. Of those invited, 122 took part in the first round (77 researchers, 21 patients, 10 practitioners, 14 policy makers); 104/122 (85%) took part in the second round. 74 of the 83 criteria were retained in the following domains: systematic development process (9/9 criteria); providing information about options (13/13); presenting probabilities (11/13); clarifying and expressing values (3/3); using patient stories (2/5); guiding/coaching (3/5); disclosing conflicts of interest (5/5); providing internet access (6/6); balanced presentation of options (3/3); using plain language (4/6); basing information on up-to-date evidence (7/7); and establishing effectiveness (8/8).
Conclusions: Criteria were given the highest ratings where evidence existed, and these were retained. Gaps in research were highlighted. Developers, users, and purchasers of patient decision aids now have a checklist for appraising quality. An instrument for measuring the quality of decision aids is being developed.
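To make the aggregation rule concrete, the sketch below computes a group-weighted median for one criterion and applies the 7-to-9 retention band. The example ratings, the group labels, and the choice to weight groups equally by taking a median of group medians are illustrative assumptions; the study states only that medians were weighted to offset the unequal sizes of the stakeholder groups.

```python
from statistics import median

# Hypothetical first-round ratings (1-9) for one criterion, keyed by stakeholder group.
ratings = {
    "researchers":   [8, 9, 7, 8, 9, 8],
    "patients":      [7, 8, 9],
    "practitioners": [9, 8],
    "policy_makers": [7, 9, 8],
}

def aggregate_rating(ratings_by_group):
    """Weight each group equally: take the median within each group,
    then the median of the group medians, offsetting unequal group sizes."""
    group_medians = [median(r) for r in ratings_by_group.values() if r]
    return median(group_medians)

def retained(ratings_by_group, lower=7, upper=9):
    """A criterion is kept if its aggregate rating falls in the 7-9 band."""
    agg = aggregate_rating(ratings_by_group)
    return lower <= agg <= upper

print(aggregate_rating(ratings), retained(ratings))  # e.g. 8.0 True
```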
Objectives: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids).
Design: Scale development study, involving construct, item and scale development, validation and reliability testing.
Setting: There has been increasing use of decision support technologies (adjuncts to the discussions clinicians have with patients about difficult decisions). A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation.
Methods: Scale development study, involving construct, item and scale development, validation and reliability testing.
Participants: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies.
Results: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the eight raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed; it had very similar mean scores to the full IPDASi and a high correlation between the short and overall scores of 0.87 (CI 0.79 to 0.92).
Conclusions: This work demonstrates that IPDASi can assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
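As a rough illustration of the reported statistics, the sketch below rescales mean item ratings onto a 0 to 100 range and computes Cronbach's alpha, treating raters as the "items" and the rated technologies as the observations. The 1-to-4 item scale, the example scores, and the rater-by-technology layout are hypothetical assumptions for illustration, not the published IPDASi scoring rules.

```python
import numpy as np

def scaled_score(item_ratings, low=1, high=4):
    """Rescale the mean item rating linearly onto 0-100 (assumed scaling)."""
    mean = np.mean(item_ratings)
    return 100.0 * (mean - low) / (high - low)

def cronbach_alpha(scores):
    """Cronbach's alpha for a raters-by-targets matrix.
    Rows are treated as 'items' (here: raters); columns are the objects scored."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[0]                          # number of raters
    item_vars = scores.var(axis=1, ddof=1)       # variance of each rater's scores
    total_var = scores.sum(axis=0).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: 3 raters scoring 5 decision support technologies (overall 0-100 scores).
scores = [
    [62, 45, 78, 55, 70],
    [60, 48, 75, 58, 72],
    [65, 42, 80, 52, 68],
]
print(round(cronbach_alpha(scores), 2))
print(round(scaled_score([3, 4, 2, 3], low=1, high=4), 1))  # -> 66.7
```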
Although several clear definitions of shared decision making have been proposed, only about a third of the papers reviewed cite one. In the remaining papers, authors either use the term without specifying or citing a definition, or use it in ways inconsistent with the definition they cite. This is a problem because a clear definition of the concept, applied consistently, is essential to guide and focus research. Authors should use the term in a way that is consistent with the definition they identify.