2021
DOI: 10.1016/j.ejor.2020.06.042
Superforecasting reality check: Evidence from a small pool of experts and expedited identification

Cited by 12 publications (7 citation statements). References: 40 publications.
“…Seekers often estimate that the main cost of an idea challenge comes from the reward itself; yet, making mistakes in the selection process for "building a crowd" (Dahlander & Piezunka, 2020) and picking the wrong idea might result in the seeker wasting time and money. Also, the scarcity of resources for engaging a vast number of external experts or judges, and the effectiveness of a limited number of judges, have been questioned in the literature on crowdsourcing for judgmental forecasting and superforecasting (Katsagounos, Thomakos, Litsiou, & Nikolopoulos, 2021). Accordingly, we argue that the research contribution can be classified as "exaptation" (Gregor & Hevner, 2013), a known solution to a new problem.…”
Section: Introduction (mentioning)
confidence: 95%
“…There are probably correlates of decision tools' predictive validity that are observable 77–79,93,98,105–112. There is a readable management literature 101,104, reflecting a large technical literature 102,103,113–119, on how to tackle difficult evaluation tasks, and on how to decide if the benefit of evaluation is likely to outweigh the costs 101 (Figure 4). But this knowledge has been under-deployed, perhaps because the value of measurement is not clear unless one runs the decision-theoretic maths 6,86 (Figure 3, Figure 4).…”
Section: [H2] Decision Tool Evaluation (mentioning)
confidence: 99%
“…Subsequent research after the superforecasting experiment has included further exploration of optimal forecasting tournament preparation (Horowitz et al. 2021; Katsagounos et al. 2021) and extending Tetlock and Mellers' approach to answer broader, more time-distant questions (Page, Aiken, and Murdick 2020). It should be noted that there have been no recent advances in computational toolkits for the field similar to that proposed in this paper.…”
Section: Forecasting (mentioning)
confidence: 99%