2020
DOI: 10.21105/joss.02461
oolong: An R package for validating automated content analysis tools

Cited by 15 publications (5 citation statements)
References 24 publications
“…This was accomplished using the structural topic model package in R, and captured the top 30 topics across the data set, which we built to contextualize our findings with regard to the third research question. The topic model was validated with a word intrusion test using R's oolong package (C.-h. Chan & Sältzer, 2020) and achieved an acceptable accuracy of 80%, as all topics reported specifically in this article could be successfully identified in the word intrusion test.…”
Section: Methods
mentioning confidence: 73%
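As a reader's aid, a minimal sketch of how such a word intrusion test is typically set up with oolong is shown below. This is not code from the cited study; `stm_fit` is a hypothetical fitted structural topic model standing in for the 30-topic model described in the quote.

```r
library(oolong)

# `stm_fit` is a hypothetical fitted stm object (e.g. 30 topics),
# standing in for the model described in the quoted study.
word_test <- create_oolong(input_model = stm_fit)

# The coder answers the intruder-word questions in an interactive Shiny gadget
word_test$do_word_intrusion_test()

# Lock the test; printing the object then reports model precision,
# i.e. the share of intruder words correctly identified (cf. the 80% above)
word_test$lock()
word_test
```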
“…Furthermore, one can also distinguish between validation steps that are universally applicable, and validation steps that are only eligible for specific types of methods (Birkenmaier et al., 2023; Grimmer & Stewart, 2013). The literature on unsupervised methods, for instance, proposes a great variety of metrics and validation steps to demonstrate the consistency of topics for specific variants of topic models (Chan & Sältzer, 2020; Chang et al., 2009; Ying et al., 2022). On the other hand, there are generic validation steps that can be universally applied to different types of CTAM.…”
Section: Generic or Method-specific Validation
mentioning confidence: 99%
“…Often, these method-specific contributions propose valuable workflows, targeted at the unique challenges of validating specific types of CTAM. Recent examples of such method-specific frameworks and guidelines target in particular unsupervised methods (Bernhard et al., 2023; Chan & Sältzer, 2020; Maier et al., 2022; Terragni et al., 2021; Ying et al., 2022) or supervised methods (see Chapter 20 on validation in Grimmer et al., 2022; Park & Montgomery, 2023). While they tend to be geared towards specific method-specific workflows, these approaches still furnish researchers with practical and valuable guidance.…”
Section: An Outlook On More Unified Validation Framework
mentioning confidence: 99%
“…[6][7][8][9][10][11] for details of the model robustness checks. We validated our model choice with the use of manual topic intrusion validation using the R package oolong [90]. We created a Shiny app with an intrusion test which was completed by 6 third-party testers who were unaware of the aims of the research models.…”
Section: Latent Dirichlet Analysis Topic Model
mentioning confidence: 99%
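The deployment route described in this quote (a Shiny app completed by third-party testers) can be approximated with oolong's export facilities. The sketch below is an illustration under assumptions, not the cited study's code: `lda_fit` and `doc_texts` are hypothetical placeholders for the fitted LDA model and its source documents, and `export_oolong()` / `revert_oolong()` are features of recent oolong releases that may differ from the version used in the study.

```r
library(oolong)

# Hypothetical placeholders: `lda_fit` is a fitted LDA model and
# `doc_texts` a character vector of the underlying documents.
topic_test <- create_oolong(input_model = lda_fit,
                            input_corpus = doc_texts,
                            type = "ti")   # topic intrusion test only

# Export the test as a standalone Shiny app so testers who do not use R
# can complete it (e.g. after deployment to a Shiny server)
export_oolong(topic_test, dir = "oolong_ti_app")

# Responses returned by the testers can later be read back with
# revert_oolong() and summarised across coders with summarize_oolong().
```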