2013
DOI: 10.1093/ejo/cjs074
Reliability of four different computerized cephalometric analysis programs: a methodological error

Cited by 43 publications (35 citation statements)
References 1 publication
“…Second, the Kappa value also depends on the number of categories [2,4-6]. Finally, the third important flaw arises when the two raters have unequal marginal distributions of their responses. Therefore, the weighted Kappa would be a good choice for investigating intra-observer reliability.…”
Section: Dear Editor (mentioning, confidence: 99%)
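The dependence on marginal distributions noted in this statement is easy to demonstrate numerically. The sketch below is our illustration, not code from the cited letter: it computes Cohen's kappa for two hypothetical rating tables that share the same 80 per cent observed agreement but have different marginals.

```python
# A minimal numeric sketch (ours, not from the cited letter) of the kappa
# paradox: identical observed agreement, very different kappa values once
# the raters' marginal distributions differ.
import numpy as np

def cohen_kappa(table: np.ndarray) -> float:
    """Cohen's kappa from a two-rater contingency table."""
    n = table.sum()
    p_observed = np.trace(table) / n
    # Chance agreement from the product of the two raters' marginals.
    p_expected = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1 - p_expected)

balanced = np.array([[40, 10],
                     [10, 40]])   # symmetric marginals, 80% agreement
skewed   = np.array([[75,  5],
                     [15,  5]])   # unequal marginals, still 80% agreement

print(cohen_kappa(balanced))  # ~0.60
print(cohen_kappa(skewed))    # ~0.23, despite the same % agreement
```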
“…In this letter, we discussed statistical approaches to reliability and important limitations of applying the Kappa coefficient to assess reliability [2-6]. Any conclusion in reliability analyses needs to be supported with respect to the methodological and statistical issues mentioned above. Otherwise, misinterpretation cannot be avoided.…”
Section: Dear Editor (mentioning, confidence: 99%)
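The weighted kappa these statements recommend can be computed with standard tooling. A minimal sketch, assuming scikit-learn is installed and using made-up ordinal ratings: `cohen_kappa_score` takes a `weights` argument so that near-miss disagreements are penalized less than distant ones.

```python
# Unweighted vs. linearly weighted kappa for hypothetical repeat ratings
# by one observer on a 1-4 ordinal scale (illustrative data only).
from sklearn.metrics import cohen_kappa_score

first_pass  = [1, 2, 2, 3, 4, 4, 3, 2, 1, 3]
second_pass = [1, 2, 3, 3, 4, 3, 3, 2, 2, 3]

print(cohen_kappa_score(first_pass, second_pass))                    # unweighted
print(cohen_kappa_score(first_pass, second_pass, weights="linear"))  # weighted
```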
“…Reliability (repeatability or reproducibility) is often assessed with statistical tests such as Pearson's r, least squares, and the paired t test, all of which are common mistakes in reliability analysis [2]. Briefly, the Intra-class Correlation Coefficient (ICC) should be used for quantitative variables and the weighted kappa for qualitative variables, and even kappa should be applied with caution because it has limitations of its own [3,4,7-10]. Two important weaknesses of the kappa value for assessing agreement on a qualitative variable are as follows: it depends on the prevalence in each category, which means that two tables can yield different kappa values despite having the same percentages in both the concordant and discordant cells!…”
(mentioning, confidence: 99%)
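For the quantitative case, the ICC mentioned above can be computed directly from the two-way ANOVA mean squares. The sketch below is our illustration, assuming the standard Shrout and Fleiss single-rater, absolute-agreement form ICC(2,1); the patient data are hypothetical.

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater.
# Rows are subjects, columns are raters (illustrative sketch only).
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    n, k = ratings.shape                      # n subjects, k raters
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                   # between-subject mean square
    msc = ss_cols / (k - 1)                   # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical angular measurements: 5 patients, 2 raters.
data = np.array([[78.1, 78.4],
                 [81.0, 80.6],
                 [75.3, 75.9],
                 [79.8, 79.5],
                 [82.2, 82.0]])
print(icc_2_1(data))  # close to 1 when the raters agree closely
```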
“…Otherwise, misdiagnosis and mismanagement of the patients cannot be avoided [3-10].…”
(mentioning, confidence: 99%)