2023
DOI: 10.1177/16094069231160973

Developing Shared Ways of Seeing Data: The Perils and Possibilities of Achieving Intercoder Agreement

Abstract: All research, whether qualitative or quantitative, is concerned with the extent to which analyses can adequately describe the phenomena they seek to describe. In qualitative research, we use internal validity checks like intercoder agreement to measure the extent to which independent researchers observe the same phenomena in data. Researchers report indices of agreement to serve as evidence of consistency and dependability of interpretations, and we do so to make claims about the trustworthiness of our research a…

Cited by 5 publications (2 citation statements)
References 17 publications
“…We are currently working on automated models that can reliably classify the discourse categorized in our tool. However, even with human coding, there are challenges with inter-rater reliability and agreement in talk categories [18], and people in general may not trust AI judgments due to lack of transparency [74,102]. We found that teachers expressed confusion in how the talk codes were categorized even with human coding, which may impact trust and how teachers might perceive any feedback provided from an AI system.…”
Section: Challenges of Authentic Classroom Studies and Alternatives
confidence: 82%
“…Miscellaneous talk that did not fall within these categories was labeled as Other Classroom Talk and Other Outside Talk. A subgroup from the research team worked together to achieve intercoder reliability in a complex and long process that spanned two academic years, reaching a Cohen's Kappa of .70 or above between expert and novice coders, which was considered very good agreement [18].…”
Section: Dialogue Categorization and Coding
confidence: 99%
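
The .70 threshold mentioned above refers to Cohen's Kappa, which compares the agreement two coders actually reach against the agreement expected by chance from their label frequencies. As a minimal, illustrative sketch (not taken from the cited study), the Python snippet below computes the statistic for two hypothetical coders; the talk-code labels and values are invented purely for demonstration.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, based on each coder's
    marginal label frequencies.
    """
    assert len(coder_a) == len(coder_b), "coders must label the same items"
    n = len(coder_a)

    # Observed agreement: proportion of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement: product of the two coders' marginal proportions,
    # summed over every label either coder used.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(coder_a) | set(coder_b))

    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from an expert and a novice coder (illustrative only).
expert = ["question", "explanation", "question", "other", "explanation", "question"]
novice = ["question", "explanation", "other",    "other", "explanation", "question"]

print(round(cohen_kappa(expert, novice), 2))  # 0.75, just above the .70 threshold
```

With these invented labels the coders agree on 5 of 6 items (p_o ≈ 0.83) while chance agreement is about 0.33, giving kappa = (0.83 − 0.33) / (1 − 0.33) = 0.75, slightly above the reported cutoff for good agreement.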