1991
DOI: 10.1159/000186220

Observer Agreement in the Scoring of the Activity and Chronicity Indexes of Lupus Nephritis

Abstract: The present study was designed to evaluate observer reliability in the scoring of the activity and chronicity indexes among three experienced pathologists in renal biopsies from lupus nephritis (LN). Twenty-five renal biopsies of LN were evaluated independently by three pathologists to assess the interobserver variability. For the intraobserver agreement, 5 biopsies were evaluated twice by each pathologist. The interobserver agreement for the scoring of the activity and chronicity indexes was 0.81 and 0…

Cited by 16 publications (13 citation statements)
References 8 publications
“…Our findings that clinicopathologic correlations were similar at biopsies 1 and 2 do suggest that biopsy interpretation was performed in a consistent manner, especially given that the pathologist was blind to the patients' clinical status. However, given the controversy in the literature as to inter- and intra-observer reliability of the NIH scoring system [30][31][32], future pediatric studies should evaluate these characteristics of the NIH index and the TIAI.…”
Section: Discussion (mentioning)
confidence: 99%
“…Kappa statistics were used to assess intra- and inter-observer reproducibility of each pathological index. A value of κ greater than 0.75 was taken as an arbitrary index of excellent agreement beyond chance expectation, while a value of κ between 0.4 and 0.75 was regarded as fair to good concordance [19]. P < 0.05 was considered statistically significant.…”
Section: Methods (mentioning)
confidence: 99%
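The thresholds quoted above (κ greater than 0.75 for excellent agreement, 0.4 to 0.75 for fair to good concordance) are easy to apply once κ has been computed. The following is a minimal Python sketch, not the method of the cited studies: the scores are invented, the use of linearly weighted Cohen's kappa for ordinal index scores and the averaging of pairwise values across three raters are assumptions made here for illustration.

```python
# Illustrative only: the scores below are invented, not data from the cited studies.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical activity-index scores assigned by three pathologists
# to the same set of biopsies.
scores = {
    "pathologist_1": [4, 7, 12, 3, 9, 15, 6, 2],
    "pathologist_2": [5, 7, 11, 3, 10, 14, 6, 2],
    "pathologist_3": [4, 8, 12, 2, 9, 15, 7, 3],
}

def describe(kappa: float) -> str:
    """Label a kappa value using the thresholds quoted above."""
    if kappa > 0.75:
        return "excellent agreement beyond chance"
    if kappa >= 0.4:
        return "fair to good concordance"
    return "poor agreement"

# Pairwise weighted kappa; the overall interobserver figure is summarised
# here as the mean of the pairwise values (an assumption, not the papers' method).
pairwise = []
for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    k = cohen_kappa_score(a, b, weights="linear")
    pairwise.append(k)
    print(f"{name_a} vs {name_b}: kappa = {k:.2f} ({describe(k)})")

print(f"mean pairwise kappa: {np.mean(pairwise):.2f}")
```

Linear weighting is a design choice that credits near-misses on an ordinal scale; unweighted Cohen's kappa, or a multi-rater statistic such as Fleiss' kappa, would be equally reasonable depending on how the original analyses were set up.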
“…Indeed, any medical imaging technique involving human reporting is conditioned by significant interpretative subjectivity. This poor reproducibility has been repeatedly documented in the older kidney histopathological literature [26][27][28][29] and has recently been confirmed in studies concerning the application of Deep Learning in kidney pathology [1,2]. The indices of agreement between human pathologists vary according to the metrics used, the preparation of the specimen, the histological parameters assessed, and the kidney disease considered; overall, the reported agreement rate between human kidney pathologists is fair to moderate, with agreement ratios ranging between 0.3 and 0.6.…”
Section: Discussion (mentioning)
confidence: 79%