1992
DOI: 10.1037/0021-9010.77.2.161

A disagreement about within-group agreement: Disentangling issues of consistency versus consensus.

Cited by 433 publications (393 citation statements)

References 28 publications
“…And yet Kozlowski and Hattrup (1992), among others, recognized that individuals working within bounded organizational contexts (e.g., teams) may encounter homogenous situational factors that lead to shared interpretations and collective response tendencies. Importantly, functional relationships at more than one level of analysis cannot be assumed equivalent (Kozlowski & Klein, 2000).…”
Section: Rationale for the Present Conceptual Scheme
confidence: 99%
“…With instrument reliability we refer in a broad sense to features of stability and consistency in instrument use by different raters [1]. Measuring the level of interrater agreement informs about the extent to which different raters essentially make the same assessments [2][3][4]. This fundamental aspect of instrument reliability concerns the stability of assessments across different raters.…”
Section: Introduction
confidence: 99%
“…This fundamental aspect of instrument reliability concerns the stability of assessments across different raters. For an instrument to be regarded as reliable though, it is equally important that there is a consistency among raters with respect to assessment variability [2][3][4]. To achieve high reliability, in addition to high agreement it must also be possible to detect true variation by means of the assessments [5].…”
Section: Introduction
confidence: 99%
“…The r_wg index is also not a statistical measure of similarity in the rank order of items across clinicians; instead, it measures whether or not clinicians are giving essentially the same ratings on treatment belief items. Specifically, the r_wg index represents the proportion of observed variance compared to the proportion of variance one may expect with random responding on the item (Kozlowski & Hattrup, 1992). Typically, an r_wg of .70 or greater is treated as evidence that there is sufficient agreement among raters to justify aggregating responses to a group level, i.e., that a particular perception can be treated as shared at the group level of analysis within a specific aggregation of individuals (Klein & Kozlowski, 2000).…”
Section: Analyses
confidence: 99%
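The r_wg computation described in the passage above — one minus the ratio of observed rating variance to the variance expected under random responding — can be sketched as follows. This is a minimal illustration assuming the common uniform null distribution over A discrete response options (James, Demaree, & Wolf, 1984); the function and variable names are illustrative, not from the cited studies.

```python
# Minimal sketch of the single-item r_wg agreement index,
# assuming a uniform "null" distribution over the response scale.

def rwg_single_item(ratings, num_options):
    """r_wg = 1 - (observed variance / variance expected under random responding)."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance of the observed ratings (n - 1 denominator).
    s2 = sum((x - mean) ** 2 for x in ratings) / (n - 1)
    # Expected variance of a uniform distribution over A response
    # options: sigma_EU^2 = (A^2 - 1) / 12.
    sigma_eu2 = (num_options ** 2 - 1) / 12
    return 1 - s2 / sigma_eu2

# Five raters on a 5-point scale, mostly agreeing -> r_wg near 1;
# values of .70 or greater are conventionally taken to justify aggregation.
print(rwg_single_item([4, 4, 5, 4, 4], num_options=5))
```

Note that identical ratings yield r_wg = 1 (perfect consensus), while ratings as dispersed as random responding yield r_wg near 0, which is the consensus interpretation the quoted passages contrast with consistency-based reliability.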
“…These less conservative analyses are useful for determining whether a "shared vision" exists, and if so, at what level of grouping (i.e., organization, recovery status, methadone clinic, or research affiliation) it exists. To make these assessments, we relied on the r_wg index, which is a measure of agreement or consensus within a group (Kozlowski & Hattrup, 1992; James, Demaree, & Wolf, 1993). In the current study, r_wg represents the degree of similarity among clinics, in recovery/not in recovery clinicians' ratings of treatment beliefs, methadone/non-methadone clinic, and research/non-research affiliated clinic, or the extent to which these groups of clinicians give the same or very similar ratings to the treatment belief items.…”
Section: Analyses
confidence: 99%