2018
DOI: 10.1002/sim.7679

Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies

Abstract: Many published scale validation studies determine inter‐rater reliability using the intra‐class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the con…

Cited by 130 publications (81 citation statements)
References 36 publications
“…The agreement percentage and the Kappa statistic are typically used when addressing categorical variables. However, when continuous variables are considered, the Pearson correlation coefficient (Pearson's R), the intraclass correlation coefficient (ICC), the Bland-Altman plot with limits of agreement (LOA), and the coefficient of variation can be used [57][58][59][60][61][62][63]. As the scale of measurement in this study is continuous, Pearson's R, the Bland-Altman plot with LOA, and the ICC were chosen as reliability measures.…”
Section: Methods
confidence: 99%
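The measures named in the statement above can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the cited studies: `icc_oneway` computes a one-way random-effects ICC(1,1) from the standard one-way ANOVA mean squares, and `bland_altman_limits` returns the bias and 95% limits of agreement between two raters; Pearson's R is available directly via `np.corrcoef`.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC, ICC(1,1).

    ratings: 2-D array-like, rows = subjects, columns = raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # One-way ANOVA mean squares: between subjects and within subjects
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement for two raters."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```

When the two raters agree perfectly, `icc_oneway` returns 1.0 and the limits of agreement collapse to zero width; established packages (e.g. `pingouin.intraclass_corr`) additionally report the other ICC forms and confidence intervals.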
“…This study uses the ICC to measure inter-method reliability between four indices of walkability. The ICC is used to determine the consistency of measurements (reliability): A higher ICC indicates greater consistency [62]. In measuring the degree of agreement between variables, Kappa statistics are used for categorical variables, whereas the ICC is used for numerical or quantitative variables [63].…”
Section: Methods
confidence: 99%
“…Fedorov et al. calculated ICC values for apparent diffusion coefficient maps computed with a mono‐exponential function for the tumor region in 15 PCa patients scanned with 2 different scanners. Although differences in study subjects and applied MRI acquisition protocols make a direct comparison of ICC values difficult, these studies demonstrated that raw imaging signal and texture features can be repeatable. Recently, public access to prostate MRI of 15 subjects has been provided by Fedorov et al. However, radiomic features for non‐Gaussian DWI models have not been evaluated in terms of short‐term repeatability using the same scanner parameters.…”
Section: Introduction
confidence: 99%
“…The threshold level for statistical significance was p<0.05. The consistency between qualitative grading diagnoses and quantitative grading diagnoses of VUR was calculated using the weighted kappa coefficient and intraclass correlation coefficient (ICC) [23]. Kappa and ICC values were interpreted using the following criteria: ≤0.20, poor; 0.21-0.40, fair; 0.41-0.60, moderate; 0.61-0.80, good; 0.81-1.00, very good [24].…”
Section: Contrast-enhanced Voiding Urosonography
confidence: 99%
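The interpretation bands quoted from reference [24] above (≤0.20 poor; 0.21-0.40 fair; 0.41-0.60 moderate; 0.61-0.80 good; 0.81-1.00 very good) map directly onto a small helper. This is a sketch of that published scale, not code from the cited study:

```python
def interpret_agreement(value):
    """Map a kappa or ICC value onto the bands quoted from [24]."""
    if value <= 0.20:
        return "poor"
    if value <= 0.40:
        return "fair"
    if value <= 0.60:
        return "moderate"
    if value <= 0.80:
        return "good"
    return "very good"
```

Note that several such scales exist in the literature (e.g. Landis & Koch use slightly different cut-points and labels), so the band boundaries should always be reported alongside the statistic itself.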