2005
DOI: 10.1111/j.1399-0012.2005.00377.x
Reproducibility of the Banff classification in subclinical kidney transplant rejection

Abstract: The Banff classification for kidney allograft pathology has proved to be reproducible, but its inter- and intraobserver agreement can vary substantially among centres. The aim of this study was to evaluate Banff reproducibility of surveillance renal allograft biopsies among renal pathologists from different transplant centres. This study included 32 renal transplant patients with stable graft function. Biopsies were performed 2 and 12 months post-transplant. Histology was interpreted according to the Banff sche…

Cited by 71 publications (59 citation statements); references 14 publications.
“…Interobserver agreement for acute rejection (κ = 0.77) and scores of glomerulitis, intimal arteritis, interstitial infiltrates and tubulitis were good (17). Similar results were seen by Veronese et al for acute rejection (κ = 0.47-0.72); however, agreement for borderline rejection and reproducibility for other scores and grades showed substantial interobserver variation (18). The subcategorization of SCR into "acute" and "borderline" by the Banff system is also problematic, as the severity of rejection critically hinges on the tubulitis score, which in turn controls the final rejection grade, and defaults to the most severe lesion.…”
Section: Reliability of Protocol Histology Results (supporting)
confidence: 67%
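The agreement figures quoted above are Cohen's kappa coefficients, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement between two raters and p_e is the agreement expected by chance from their marginal label frequencies. A minimal Python sketch of this calculation follows; the function name and the example tubulitis scores are illustrative placeholders, not data from the cited studies.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters give the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Banff tubulitis scores (t0-t3) from two pathologists on ten biopsies.
pathologist_1 = ["t0", "t1", "t1", "t2", "t0", "t1", "t3", "t0", "t2", "t1"]
pathologist_2 = ["t0", "t1", "t2", "t2", "t0", "t1", "t3", "t1", "t2", "t1"]
print(round(cohens_kappa(pathologist_1, pathologist_2), 2))

With these placeholder scores the function returns roughly 0.72, in the same range as the agreement values reported for acute rejection above.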
“…Although assignment of i ≥ 1 may vary for individual pathologists, 11,41,42,45 the results of IHC, TLDA, and microarrays confirmed that biopsies assigned to IF+i had distinct characteristics compared with those interpreted as having normal histology or IF alone. Targeted reverse transcriptase-PCR analysis demonstrated elevated expression of multiple innate and adaptive immune mediators consistent with tissue injury response, Th1-type T cell response, and suppression of counterregulatory pathways.…”
Section: Discussion (mentioning)
confidence: 93%
“…26 It has been reported that visual assessment of IF with scoring (IFTA) according to the Banff classification suffers from poor interobserver reproducibility. 3,27 Moreover, the use of a small number of grades to describe the severity of individual histologic injuries may lack sensitivity during the early stage of IFTA. 28 To improve reproducibility and the performance of IF quantification, several techniques for computerized image analysis of IF in renal biopsies (stained with Sirius red or Masson trichrome) have been developed over the last decade.…”
Section: Discussion (mentioning)
confidence: 99%
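As context for the image-analysis approach mentioned above, the sketch below outlines one simple way to quantify interstitial fibrosis as the fraction of tissue pixels positive for a red collagen stain. The thresholds and function name are assumptions chosen for illustration; the cited computerized pipelines typically add stain deconvolution and proper tissue segmentation rather than simple channel thresholds.

import numpy as np

def stained_area_fraction(rgb_image, min_red=120, dominance=1.3, background=235):
    """Rough fraction of tissue pixels positive for a red collagen stain (e.g. Sirius red)."""
    img = np.asarray(rgb_image, dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # A pixel counts as stained when red is bright and clearly dominates green and blue.
    stained = (r > min_red) & (r > dominance * g) & (r > dominance * b)
    # Restrict the denominator to tissue, excluding near-white background.
    tissue = (r < background) | (g < background) | (b < background)
    return float(stained[tissue].mean()) if tissue.any() else 0.0

A fraction computed this way gives a continuous measure of fibrosis burden, which is the property that makes automated quantification less grade-limited than visual Banff IFTA scoring.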