2020
DOI: 10.1111/acem.13941
Measuring Agreement Among Prehospital Providers and Physicians in Patient Capacity Determination

Abstract:

Objectives: If a patient wishes to refuse treatment in the prehospital setting, prehospital providers and consulting emergency physicians must establish that the patient possesses the capacity to do so. The objective of this study is to assess agreement among prehospital providers and emergency physicians in performing patient capacity assessments.

Methods: This study involved 139 prehospital providers and 28 emergency medicine physicians. Study participants listened to 30 medical control calls pertaining to patien…

Cited by 2 publications (1 citation statement) | References 17 publications
“…An expert panel comprising two epidemiologists, two public health professionals, two medical microbiologists, and two medical doctors assessed the content validity of each item in the questionnaire on a Likert scale ranging from "not relevant" and "somewhat relevant" to "quite relevant" and "highly relevant." These responses were then analyzed using Fleiss' multi-rater kappa to obtain the inter-rater agreement value at a 95% CI (O'Connor et al., 2020). The kappa value was interpreted using the following guide: values ≤ 0 indicate no agreement, 0.01-0.20 none to slight agreement, 0.21-0.40 fair agreement, 0.41-0.60 moderate agreement, 0.61-0.80 substantial agreement, and 0.81-1.00 almost perfect agreement (McHugh, 2012).…”
Section: Statistical Analysis: Validation Studies
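For readers who want to reproduce this style of analysis, the following is a minimal sketch of the Fleiss' kappa workflow the citing authors describe, assuming Python with numpy and statsmodels (the cited paper does not state its software). The eight raters and four relevance categories follow the quoted passage; the rating values themselves are invented for illustration.

# Sketch only, not the cited paper's code: eight panel raters score each
# questionnaire item on a 4-point relevance scale, and Fleiss' multi-rater
# kappa summarizes their agreement. Ratings below are made-up example data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = questionnaire items, columns = the 8 raters; codes 0-3 map to
# "not relevant" .. "highly relevant".
ratings = np.array([
    [3, 3, 3, 2, 3, 3, 3, 3],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [3, 3, 3, 3, 3, 3, 2, 3],
    [1, 2, 1, 2, 2, 1, 2, 2],
])

# Convert per-rater labels into the items x categories count table
# that fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")

def mchugh_band(k):
    # Map a kappa value onto the McHugh (2012) interpretation guide.
    if k <= 0:
        return "no agreement"
    for upper, label in [(0.20, "none to slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial"),
                         (1.00, "almost perfect")]:
        if k <= upper:
            return label
    return "almost perfect"

print("Fleiss' kappa = %.3f (%s agreement)" % (kappa, mchugh_band(kappa)))

Note that fleiss_kappa returns only a point estimate; the 95% CI mentioned in the quoted passage would typically be obtained separately, for example by bootstrapping over items, which is omitted here.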