2020
DOI: 10.1177/1555343420926287

Improving Analytic Reasoning via Crowdsourcing and Structured Analytic Techniques

Abstract: How might analytic reasoning in intelligence reports be substantially improved? One conjecture is that this can be achieved through a combination of crowdsourcing and structured analytic techniques (SATs). To explore this conjecture, we developed a new crowdsourcing platform supporting groups in collaborative reasoning and intelligence report drafting using a novel SAT we call “Contending Analyses.” In this paper we present findings from a large study designed to assess whether groups of professional analysts …

Cited by 9 publications (4 citation statements)
References 34 publications
“…In peer review, which emerged in the Enlightenment and was firmly institutionalized in the 20th century (Shema, 2014), subject matter experts review articles, point out flaws, and decide on publishability. This process likely improves the quality of the literature because authors must satisfy at least a few peers, and mini-crowdsourcing expertise likely generates insights that individuals missed (Surowiecki, 2004; van Gelder et al, 2020). But it has numerous flaws, including interrater reliability that is barely above chance (Bornmann et al, 2010; Forscher et al, 2019; Lee et al, 2013), the singular authority of editors to select the reviewers (often with awareness of reviewers’ predilections), and its failure to prevent the replication crisis.…”
Section: The Goals Of Society and Science
confidence: 99%
“…Its main principles (interpretation, analysis, evaluation, explanation and inference) are leveraged by the Intelligence Community (IC) Rating Scale for Evaluating Analytic Tradecraft Standards (ODN 2015) and the so-called sense-making scale developed by Alsufiani et al (2017). Adaptations of both scales have been used to successfully elaborate collaborative intelligence tools (van Gelder et al 2020; De Liddo et al 2021). It will, however, be necessary to attune these scales to the (mis)information ecosystem.…”
Section: Discussion
confidence: 99%
“…Groups in the public and professional domains can contribute draft reports and opinions in response to intelligence requirements. A recent evaluation of SWARM demonstrates improvements in the quality of the reports produced when using the system (van Gelder et al, 2020), highlighting the potential for systems to include crowdsourced contributions.…”
Section: Information Requirements, Crowdsourcing and Social Sensing
confidence: 99%