2005
DOI: 10.1016/j.healthpol.2004.05.004

The consistency of panelists’ appropriateness ratings: do experts produce clinically logical scores for rectal cancer treatment?

Cited by 7 publications (8 citation statements)
References 24 publications
“…It has been demonstrated that appropriateness guidelines developed using this method are reproducible, 10 are consistent clinically, 11 and are correlated with clinical outcomes. 12 Oncology applications have included breast cancer, 13 melanoma, 14 colorectal cancer, 9,15 and hematologic malignancies.…”
Section: Methods (mentioning)
confidence: 94%
“…This method, developed in the mid-1980s by the RAND Corporation and the School of Medicine of the University of California at Los Angeles (UCLA), combines the best available scientific evidence with the practical experience of experts in the field to yield a statement regarding the appropriateness or inappropriateness of medical and surgical procedures. It has been verified that appropriateness guidelines developed by means of the RAM are reproducible [8], clinically consistent [9], and correlated with clinical outcomes [10]. The RAM has several stages: 1) a systematic review of the available scientific literature on the procedure to be rated; 2) development of a list of clinical scenarios that categorise patients likely to be encountered in clinical practice for the procedure in question in terms of their specific symptoms and signs, medical history and test results; 3) selection and convening of an expert panel; and 4) rating by the experts of the benefit-to-harm ratio of the procedure for each clinical scenario, following a two-round modified Delphi technique [7].…”
Section: Methods (mentioning)
confidence: 76%
“…Each panelist was provided with his or her own first-round ratings together with the frequency distribution of all the experts' responses. For each clinical scenario-treatment combination, the median score was taken and interpreted as appropriate (7-9), uncertain (4-6) or inappropriate (1-3). In addition, agreement/dispersion of views was assessed using the following definitions: (1) Agreement: no more than two panelists rate the indication outside the 3-point region (1-3, 4-6, or 7-9) containing the median; (2) Disagreement: at least two panelists rate the indication in the 1-3 region and at least two rate it in the 7-9 region; (3) Indeterminate: all other eventualities.…”
Section: Methods (mentioning)
confidence: 99%
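The median/agreement rules quoted in this citation statement lend themselves to a compact sketch. The Python below is an illustration written for this summary, not code from the cited study; the function names (`region`, `classify_panel`) are invented, and an odd-sized panel is assumed so the median falls on a whole 1-9 score.

```python
from statistics import median

def region(score):
    """Map a 1-9 rating to its 3-point region."""
    if score <= 3:
        return "inappropriate"  # 1-3
    if score <= 6:
        return "uncertain"      # 4-6
    return "appropriate"        # 7-9

def classify_panel(ratings):
    """Classify one scenario-treatment combination from panelists' 1-9 ratings."""
    med = median(ratings)
    appropriateness = region(med)
    # Panelists rating outside the 3-point region containing the median,
    # and panelists in each extreme region, per the quoted definitions.
    outside = sum(1 for r in ratings if region(r) != appropriateness)
    low = sum(1 for r in ratings if r <= 3)   # 1-3 region
    high = sum(1 for r in ratings if r >= 7)  # 7-9 region
    if low >= 2 and high >= 2:
        dispersion = "disagreement"
    elif outside <= 2:
        dispersion = "agreement"
    else:
        dispersion = "indeterminate"
    return appropriateness, dispersion
```

For example, a nine-member panel rating `[8, 8, 7, 9, 7, 8, 8, 7, 9]` yields `("appropriate", "agreement")`, while `[1, 2, 2, 5, 5, 5, 8, 8, 9]` yields `("uncertain", "disagreement")`.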
“…If true, these types of clinical tools follow, rather than lead, practice. Potential biases of the AUC method include variability based on the composition of the expert panel (Coulter, Adams, & Shekelle, 1995) and misclassification bias within clinical vignettes, resulting in the propagation of clinical dogma (Hodgson et al, 2005). The method may in fact perpetuate and even legitimize established practice biases and norms. Referring providers at both institutions might have been closely following the published 2005 AUC algorithm.…”
Section: Discussion (mentioning)
confidence: 99%