2019
DOI: 10.1145/3359178

Understanding Expert Disagreement in Medical Data Analysis through Structured Adjudication

Abstract: Expert disagreement is pervasive in clinical decision making and collective adjudication is a useful approach for resolving divergent assessments. Prior work shows that expert disagreement can arise due to diverse factors including expert background, the quality and presentation of data, and guideline clarity. In this work, we study how these factors predict initial discrepancies in the context of medical time series analysis, examining why certain disagreements persist after adjudication, and how adjudication…

Cited by 38 publications (23 citation statements: 1 supporting, 22 mentioning, 0 contrasting).
References 27 publications.

“…Our results show that it is possible to build computational models of "near-peer" disagreement. Additionally, they provide support for the empirical observations of disagreement adjudication among medical experts [34,35], where the authors observe that the differences in experts' backgrounds increase the degree of disagreement.…”
Section: Discussion and Possible Extensions of This Work (supporting)
confidence: 57%
“…Once medical experts express their disagreements, what happens next? Observations from disagreement adjudication are analyzed in [34,35], where the authors observe (among other things) that the differences in experts' backgrounds increase the degree of disagreement.…”
Section: Other Work on Disagreements and Contradictions (mentioning)
confidence: 99%
“…If labels are assigned by human experts, describe methods in detail. Describe any efforts to quantify and mitigate intra- and inter-observer labeling differences [5]. Also, describe how closely the temporal alignment of the labels relates to the data segments being assigned.…”
Section: The Datasets Used for Model Development, Validation, and Testing (mentioning)
confidence: 99%
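The statement above recommends quantifying inter-observer labeling differences. One standard chance-corrected statistic for this is Cohen's kappa; the sketch below is a minimal plain-Python implementation for two raters assigning categorical labels to the same items. The rater names and example labels are illustrative assumptions, not data from the cited papers.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each rater's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters labeling the same ten time-series segments.
rater_1 = ["afib", "normal", "afib", "normal", "normal",
           "afib", "afib", "normal", "afib", "normal"]
rater_2 = ["afib", "normal", "normal", "normal", "normal",
           "afib", "afib", "afib", "afib", "normal"]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # kappa = 0.60
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; for more than two raters, the same idea extends to pairwise averages or Fleiss' kappa.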
“…Glaucoma images, in fact medical images in general, are usually labeled by multiple experts independently, so as to avoid the subjective bias or potential labeling noise of each rater resulting from different levels of expertise, negligence of subtle symptoms, quality of images, etc [13]. The final ground-truth label can then be obtained by fusing individual labels using majority vote, averaging, or other fusion strategies [13]. However, at the model training stage, only the final ground-truth label is utilized to train the model, and the intermediate labels generated by individual raters are neglected, even though they contain important information regarding the gradeability or difficulty levels of the images.…”
Section: arXiv:2007.14848v1 [cs.CV] 29 Jul 2020 (mentioning)
confidence: 99%
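The fusion strategies this snippet names are easy to make concrete. Below is a minimal Python sketch of majority-vote fusion for categorical grades and mean fusion for continuous scores, plus the per-image disagreement signal that the snippet argues is discarded when only the fused label is kept. All image IDs, grader labels, and function names are illustrative assumptions, not the cited paper's code.

```python
from collections import Counter

def majority_vote(votes):
    """Fuse categorical labels from several raters (ties broken arbitrarily)."""
    return Counter(votes).most_common(1)[0][0]

def mean_fusion(scores):
    """Fuse continuous grades, e.g. cup-to-disc ratios, by averaging."""
    return sum(scores) / len(scores)

# Hypothetical labels from three independent graders per fundus image.
labels = {
    "img_001": ["glaucoma", "glaucoma", "normal"],
    "img_002": ["normal", "normal", "normal"],
}
ground_truth = {img: majority_vote(v) for img, v in labels.items()}
print(ground_truth)  # {'img_001': 'glaucoma', 'img_002': 'normal'}

# Disagreement fraction: 0.0 when unanimous, higher when raters split.
disagreement = {img: 1 - Counter(v).most_common(1)[0][1] / len(v)
                for img, v in labels.items()}
print(disagreement)  # {'img_001': 0.33..., 'img_002': 0.0}

# Mean fusion for continuous grades (values are made up for illustration).
cdr = {"img_001": [0.72, 0.68, 0.75]}
print({img: round(mean_fusion(s), 2) for img, s in cdr.items()})  # {'img_001': 0.72}
```

The disagreement fraction computed above is one simple proxy for image difficulty; the snippet's point is that discarding the individual raters' votes loses exactly this kind of training-relevant signal.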