1997
DOI: 10.1037/1082-989x.2.4.357

Detecting sequential patterns and determining their reliability with fallible observers.

Abstract: On the basis of recent work by W. Gardner (1995), implications of fallible observers for observational research are discussed. Analysis shows that for identically fallible observers, values for kappa are lower when codes are few and their simple probabilities variable than when codes are many and roughly equiprobable; thus no …
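The abstract's claim can be checked with a small Monte Carlo sketch. The error model here is an assumption for illustration (not necessarily the paper's exact simulation): each of two independent observers records the true code with probability `accuracy` and otherwise picks uniformly among the remaining wrong codes, with equiprobable true codes.

```python
import random
from collections import Counter

def simulate_kappa(n_codes, accuracy, n_events=200_000, seed=1):
    """Monte Carlo estimate of Cohen's kappa for two identically
    fallible observers coding equiprobable codes (illustrative model:
    each observer records the true code with prob `accuracy`, else a
    uniformly random wrong code)."""
    rng = random.Random(seed)

    def observe(true_code):
        if rng.random() < accuracy:
            return true_code
        wrong = rng.randrange(n_codes - 1)   # pick among the k-1 wrong codes
        return wrong if wrong < true_code else wrong + 1

    pairs = [(observe(t), observe(t))
             for t in (rng.randrange(n_codes) for _ in range(n_events))]

    p_obs = sum(a == b for a, b in pairs) / n_events
    # chance agreement from each observer's empirical marginals
    m1 = Counter(a for a, _ in pairs)
    m2 = Counter(b for _, b in pairs)
    p_chance = sum(m1[c] * m2[c] for c in range(n_codes)) / n_events**2
    return (p_obs - p_chance) / (1 - p_chance)

print(simulate_kappa(2, 0.85))   # lower kappa with few codes (near 0.49)
print(simulate_kappa(10, 0.85))  # higher kappa with many codes (near 0.69)
```

With identical observer accuracy, kappa comes out lower for 2 codes than for 10, matching the abstract's point that kappa values are not comparable across coding schemes of different sizes.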

Cited by 170 publications (123 citation statements). References 16 publications.
“…The kappas for the categories that we examined ranged from .66 to .92. Even though kappas greater than .70 are most desirable, kappas for dichotomous codes are often underestimates of interrater reliability (Bakeman, Quera, McArthur, & Robinson, 1997). Table 2 lists the kappa coefficients for each code.…”
Section: Behavioral Reactions (mentioning, confidence: 99%)
“…Note that the reliability of the composite measures (far right column), each composed of two to three indicators, tends to be somewhat higher than that of the discrete indicators, with three out of four exceeding .80 and one just shy of that level at .78. Since the statistical analysis is based mainly on composite measures, and since Kappa scores above .75 are generally understood to indicate at least 90 percent agreement (and even greater agreement for coding categories with few codes; see Bakeman et al 1997), the system is demonstrably reliable.…”
Section: Watchdog Journalism and Presidential News Conferences (mentioning, confidence: 99%)
“…28 For example, in one simulation, κ varied from 0.49 for 2 codes to 0.69 for 10 codes for observers who were 85% accurate. 29 These considerations notwithstanding, we conclude that the present prespecified analysis in 3345 patients undergoing primary PCI from the HORIZONS-AMI trial demonstrates that the concordance between Op and ACL assessment of angiographic markers of reperfusion in STEMI is only fair to moderate and that there is directionality in these disagreements, with the Op underestimating favorable findings pre-PCI and overestimating their rate post-PCI. Because primary PCI is successful in most patients, the impact of this variability is relatively minor in terms of mortality prediction.…”
Section: Discussion (mentioning, confidence: 59%)
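The 0.49-to-0.69 spread quoted in that statement is consistent with a simple closed-form model (an assumption here, not necessarily the cited simulation's exact setup): two independent observers on k equiprobable codes, each correct with probability a and erring uniformly over the other k − 1 codes. Then observed agreement is a² + (1 − a)²/(k − 1), chance agreement is 1/k, and kappa follows directly.

```python
def expected_kappa(k, a):
    """Expected Cohen's kappa for two independent observers on k
    equiprobable codes, each correct with probability `a` and erring
    uniformly over the other k-1 codes (illustrative model)."""
    # both right, or both wrong and landing on the same wrong code
    p_obs = a**2 + (1 - a) ** 2 / (k - 1)
    p_chance = 1 / k  # marginals are uniform by symmetry
    return (p_obs - p_chance) / (1 - p_chance)

print(round(expected_kappa(2, 0.85), 2))   # 0.49
print(round(expected_kappa(10, 0.85), 2))  # 0.69
```

Under this model, 85%-accurate observers yield κ = 0.49 with 2 codes and κ ≈ 0.69 with 10, reproducing the figures quoted above and illustrating why a fixed kappa threshold penalizes schemes with few codes.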