2016
DOI: 10.21037/apm.2016.03.02
Inter-rater reliability in performance status assessment among health care professionals: a systematic review

Abstract: The existing literature reports both good and poor inter-rater reliability of performance status (PS) scores. It is difficult to conclude which health care professionals' (HCPs') PS assessments are more accurate.

Cited by 84 publications (69 citation statements); References: 0 publications.
“…Correlation of PS with survival has been well documented for different types of malignancies [3][4][5], and considering that overall survival is the most reliable and preferred cancer endpoint [6], accurate assessment of PS is a powerful tool for appropriate selection of study subjects. However, agreement on the general results of PS scales is not unanimous, with inter-expert agreement varying widely (kappa = 0.19 to kappa = 0.92) [2], and the assessment methodology does not specify well-defined criteria, relying instead on the ability to perform work and self-care activities [7]. The rationale is to assess the influence of the primary oncologic disease on these parameters, but in the absence of a clear and widely recognized guideline, such assessment may be performed irrespective of cancer pathology, conflating the consequences of cancer with those of other concomitant conditions.…”
Section: Discussion (mentioning)
confidence: 92%
“…Reliability of every method that requires a subjective judgment is determined by the concordance rate between observers or reviewers. Opinions on the reliability of PS scales differ, ranging from good concordance between observers to significant disagreement [2].…”
Section: Introduction (mentioning)
confidence: 99%
“…Both scores are prone to inter-observer variability. [36][37][38] There is only moderate correlation between physician and patient reported performance status, with patients scoring themselves more pessimistically than their physicians. [39][40][41] In cancer patients, discordance between physician assessed and patient assessed performance status is associated with worse survival.…”
Section: Frailty Syndrome in HSCT Recipients (mentioning)
confidence: 99%
“…Multiple studies have assessed the inter‐rater agreement between physicians, nurses, and other health care professionals in scoring performance status of patients with cancer. Surprisingly, physician and nurse inter‐rater agreement in KPS and ECOG‐PS scores is not always robust and has been shown to vary significantly across the reported literature (Cohen's κ coefficient ranging between 0.23 and 0.77). The majority of studies suggest that physicians tend to report “healthier” ECOG‐PS or KPS scores than nurses.…”
Section: Introduction (mentioning)
confidence: 97%
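The kappa values quoted in these citation statements measure chance-corrected agreement between two raters. As a minimal sketch of how such a figure is computed, the following Python snippet implements unweighted Cohen's kappa from scratch; the `cohens_kappa` helper and the example physician/nurse ECOG-PS ratings are illustrative assumptions, not data from the review:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same subjects
    on a categorical scale: kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of subjects given identical scores.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ECOG-PS scores (0-4) from a physician and a nurse
# for the same ten patients (illustrative data only).
physician = [0, 1, 1, 2, 2, 3, 1, 0, 2, 4]
nurse     = [0, 1, 2, 2, 3, 3, 1, 1, 2, 4]
print(round(cohens_kappa(physician, nurse), 2))  # kappa ~= 0.61
```

On the commonly used Landis–Koch scale, a kappa of 0.61 falls at the lower edge of "substantial" agreement, while the extremes reported above (0.19 and 0.92) would be labeled "slight" and "almost perfect" respectively, illustrating how divergent the published inter-rater results are.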