2007
DOI: 10.1038/sj.clpt.6100322

Inter-rater Reliability of a Classification System for Hospital Adverse Drug Event Reports

Abstract: Hospital pharmacovigilance systems frequently classify adverse drug event (ADE) reports on various axes such as severity and type of outcome in an attempt to better detect changes in the frequency of certain types of ADEs. The aim of this study was to measure the inter-observer reliability of an ADE classification system. Two pharmacists and two internal medicine physicians reviewed 150 pharmacist-generated ADE reports and used a structured form to classify reports on four domains: the presence or absence of p…
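The abstract describes measuring inter-observer reliability across four raters. The standard chance-corrected agreement statistic for more than two raters is Fleiss' kappa; the abstract does not say which statistic the authors used, so the following is only a hedged illustration of the general technique, with entirely made-up ratings for five hypothetical reports.

```python
# A minimal sketch of Fleiss' kappa, the usual chance-corrected agreement
# statistic for more than two raters. The study used four raters (two
# pharmacists, two physicians) over 150 reports; the data below are
# fabricated purely for illustration.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters assigning subject i to category j."""
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()  # assumes every subject is rated by all raters
    # Per-subject observed agreement.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Expected chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 reports, 4 raters, 3 severity categories.
ratings = np.array([
    [4, 0, 0],
    [2, 2, 0],
    [0, 3, 1],
    [0, 4, 0],
    [1, 1, 2],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")  # ~0.339 here
```

For exactly two raters, Cohen's kappa (e.g., sklearn.metrics.cohen_kappa_score) is the more common choice; Fleiss' kappa generalizes the same idea to any fixed number of raters.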

Cited by 5 publications (4 citation statements)
References 10 publications
“…In the study by Snyder et al 16 each case was assessed, then classified after discussion by the individual raters, before moving to the next case. This contrasts with both the study of Haynes et al 15 (in which raters classified individually, without discussion) and our study, in which consensus was reached in a subsequent meeting only after all cases had been classified individually. We chose this method because our goal was to determine agreement between individual assessors prior to consensus building, in order to evaluate the 'average' healthcare professional opinion on preventable ADEs.…”
Section: Discussion (contrasting)
Confidence: 97%
“…[12][13][14] Our findings underline that the same holds for assessing preventable ADEs and their severity from medical charts in everyday practice. This is in line with the recently published study of Haynes et al, 15 but, surprisingly, is in contrast with the high inter-rater agreement found by Forrey et al 9 and Snyder et al. 16 So why do some studies find such poor agreement while others do not? First, the level of detail of the case information given to the assessors differs.…”
Section: Discussion (supporting)
Confidence: 88%
“…Unfortunately, poor record quality is consistently cited as a significant limitation in assessing adverse drug events [26-38]. The difficulty of classifying either potential or actual harm is reflected in reports of low inter-rater reliability among clinicians [39-41], although experience as a clinician, and engaging more than two clinicians, have been shown to improve the consistency of these ratings [42].…”
Section: Introduction (mentioning)
Confidence: 99%