DOI: 10.6339/jds.201701_15(1).0001

Assessing agreement between raters from the point of coefficients and loglinear models

Cited by 8 publications (8 citation statements) · References 50 publications
“…Even though many weighting schemes have been suggested, the linear and quadratic weights are the best-known ones. For the different weighting schemes in the literature, see [2].…”
Section: Inter-rater Agreement Coefficients (mentioning, confidence: 99%)
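The linear and quadratic weighting schemes mentioned in the statement above can be illustrated with a short sketch. The following Python example is a minimal, assumed illustration of Cohen's weighted kappa, not code from the cited papers; the 3×3 table is invented for demonstration.

```python
# A minimal sketch (not from the cited papers) of Cohen's weighted kappa
# under the linear and quadratic weighting schemes.
import numpy as np

def weighted_kappa(table, scheme="quadratic"):
    """Cohen's weighted kappa for a k x k square contingency table."""
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    p = table / table.sum()              # observed joint proportions
    row, col = p.sum(axis=1), p.sum(axis=0)
    i, j = np.indices((k, k))
    if scheme == "linear":               # w_ij = 1 - |i - j| / (k - 1)
        w = 1.0 - np.abs(i - j) / (k - 1)
    elif scheme == "quadratic":          # w_ij = 1 - (i - j)^2 / (k - 1)^2
        w = 1.0 - (i - j) ** 2 / (k - 1) ** 2
    else:
        raise ValueError("scheme must be 'linear' or 'quadratic'")
    po = (w * p).sum()                   # weighted observed agreement
    pe = (w * np.outer(row, col)).sum()  # weighted chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical 3 x 3 ratings table (rows = rater A, columns = rater B)
table = [[20, 5, 1],
         [4, 15, 6],
         [2, 3, 24]]
print(weighted_kappa(table, "linear"), weighted_kappa(table, "quadratic"))
```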
“…Square contingency tables arise when the row and column variables have the same classification [1] and are frequently used in many fields, such as medicine, sociology, and the behavioral sciences [2]. When working with these kinds of tables, the inter-rater reliability of the row and column variables is investigated.…”
Section: Introduction (mentioning, confidence: 99%)
“…A number of theoretical and methodological approaches have been proposed over the years in different disciplines for the assessment of rater repeatability and/or reproducibility; these approaches can be grouped into two main families: the index-based approach and the model-based approach. The former quantifies the rater agreement level in a single number and does not provide insight into the structure and nature of agreement differences [5-8]; the latter overcomes this criticism by modelling the ratings provided by each rater to each subject, focusing on the association structure between repeated evaluations [9,10].…”
Section: Introduction (mentioning, confidence: 99%)
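For the model-based family mentioned in the statement above, a common starting point is a log-linear agreement model that adds a diagonal (exact-agreement) term to the independence model. The sketch below is an assumed illustration, not the specification used in the cited works; the table and variable names are hypothetical.

```python
# A minimal sketch of a model-based approach: the log-linear agreement model
# log(m_ij) = lambda + row_i + col_j + delta * I(i == j), fitted as a Poisson GLM.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

counts = np.array([[20, 5, 1],
                   [4, 15, 6],
                   [2, 3, 24]])          # hypothetical square ratings table
k = counts.shape[0]
rows, cols = np.indices((k, k))
df = pd.DataFrame({
    "count": counts.ravel(),
    "row": pd.Categorical(rows.ravel()),
    "col": pd.Categorical(cols.ravel()),
    "diag": (rows.ravel() == cols.ravel()).astype(int),  # I(i == j)
})

indep = smf.glm("count ~ row + col", data=df,
                family=sm.families.Poisson()).fit()
agree = smf.glm("count ~ row + col + diag", data=df,
                family=sm.families.Poisson()).fit()

# The diagonal parameter captures agreement beyond what independence predicts;
# the drop in deviance shows how much the agreement term improves the fit.
print(agree.params["diag"], indep.deviance - agree.deviance)
```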
“…The selection of the weighting scheme is clearly articulated by Gwet [4] (see Figure 3.6.1 in Gwet [4]). Reviews of agreement measures for calculating inter-rater agreement among two or more raters are presented by Banerjee et al. [5], Gwet [4], Yilmaz and Saracbasi [6], and recently Tran et al. [3]. The number of raters can be two or more.…”
Section: Introduction (mentioning, confidence: 99%)
“…Our approach not only improves the accuracy and robustness of the agreement measures against the grey zones, but also readily produces interval estimates for the degree of inter-rater agreement. The interpretation of the value of an agreement measure in terms of the strength of agreement has not yet been strictly standardized [6]. Having a Bayesian interval estimate for the kappa measure enhances the interpretation of the degree of agreement between the raters by providing probability limits.…”
Section: Introduction (mentioning, confidence: 99%)
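As a rough illustration of the Bayesian interval idea described in the statement above (an assumed construction, not the cited authors' exact method), one can place a Dirichlet prior on the cell probabilities of the square table, sample from the posterior, and read off a credible interval for kappa:

```python
# A minimal sketch: Bayesian credible interval for Cohen's kappa via a
# Dirichlet posterior over the cell probabilities of a square table.
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([[20, 5, 1],
                   [4, 15, 6],
                   [2, 3, 24]], dtype=float)   # hypothetical ratings table
k = counts.shape[0]

def kappa(p):
    po = np.trace(p)                            # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)          # chance agreement
    return (po - pe) / (1.0 - pe)

# Dirichlet(1 + n_ij) posterior under a uniform prior on the k*k cells
draws = rng.dirichlet(counts.ravel() + 1.0, size=5000).reshape(-1, k, k)
samples = np.array([kappa(p) for p in draws])
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"95% credible interval for kappa: ({lo:.3f}, {hi:.3f})")
```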