1. Expert judgement informs a variety of important applications in conservation and natural resource management, including threatened species management, environmental impact assessment and structured decision-making. However, expert judgements can be prone to contextual biases. Structured elicitation protocols mitigate these biases and improve the accuracy and transparency of the resulting judgements. Despite this, the elicitation of expert judgement within conservation and natural resource management remains largely informal. We suggest this may be attributed to financial and practical constraints, which are not addressed by many existing structured elicitation protocols.

2. In this paper, we advocate that structured elicitation protocols must be adopted when expert judgements are used to inform science. To motivate wider adoption of structured elicitation protocols, we outline the IDEA protocol. The protocol improves the accuracy of expert judgements and includes several key steps that may be familiar to many conservation researchers, such as the four-step elicitation and a modified Delphi procedure ("Investigate," "Discuss," "Estimate" and "Aggregate"). It can also incorporate remote elicitation, making structured expert judgement accessible on a modest budget.

3. The IDEA protocol has recently been outlined in the scientific literature; however, a detailed description of how to apply it has been missing. This paper fills that important gap by clearly outlining each of the steps required to prepare for and undertake an elicitation.

4. While this paper focuses on the need for the IDEA protocol within conservation and natural resource management, the protocol (and the advice contained in this paper) is applicable to a broad range of scientific domains, as evidenced by its application to biosecurity, engineering and political forecasting. By clearly outlining the IDEA protocol, we hope that structured protocols will be more widely understood and adopted, resulting in improved judgements and increased transparency when expert judgement is required.

KEYWORDS: Delphi, expert elicitation, forecasting, four-step elicitation, IDEA protocol, quantitative estimates, structured expert judgement
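The arithmetic behind the four-step elicitation and the Delphi-style aggregation is straightforward to illustrate. The Python sketch below is a minimal, hypothetical example rather than the protocol's prescribed implementation: the linear rescaling of intervals to a common credible level, the 90% target and the unweighted averaging of experts are assumptions chosen for illustration only.

```python
# Hypothetical sketch (not the paper's specification): standardise four-step
# interval judgements (lowest, highest, best, confidence%) to a common
# credible level, then pool them across experts with an unweighted mean.
# The 90% target and the linear extrapolation rule are illustrative assumptions.

def standardise_interval(lowest, highest, best, confidence, target=90.0):
    """Rescale an expert's interval, stated with `confidence` percent
    coverage, to an interval with `target` percent coverage."""
    factor = target / confidence
    lo = best - (best - lowest) * factor
    hi = best + (highest - best) * factor
    return lo, hi

def aggregate(estimates, target=90.0):
    """Unweighted mean of best estimates and of standardised bounds."""
    stds = [standardise_interval(*e, target=target) for e in estimates]
    best = sum(e[2] for e in estimates) / len(estimates)
    lo = sum(s[0] for s in stds) / len(stds)
    hi = sum(s[1] for s in stds) / len(stds)
    return lo, best, hi

# Example: three experts answer a quantitative question with
# (lowest, highest, best, confidence%) quadruples in Round 1.
round1 = [(10, 40, 25, 80), (5, 30, 15, 90), (20, 50, 35, 70)]
print(aggregate(round1))  # pooled (lower, best, upper) at ~90% credibility
```

In a Delphi-style second round, the same aggregation would simply be repeated after experts have seen the pooled Round 1 estimates, discussed discrepancies and revised their judgements.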
Expert judgements are essential when time and resources are stretched or we face novel dilemmas requiring fast solutions. Good advice can save lives and large sums of money. Typically, experts are defined by their qualifications, track record and experience [1], [2]. The social expectation hypothesis argues that more highly regarded and more experienced experts will give better advice. We asked experts to predict how they would perform, and how their peers would perform, on sets of questions. The results indicate that the way experts regard each other is consistent, but unfortunately, these ranks are a poor guide to actual performance. Expert advice will be more accurate if technical decisions routinely use broadly defined expert groups, structured question protocols and feedback.
People interpret verbal expressions of probability (e.g. 'very likely') in different ways, yet words are commonly preferred to numbers when communicating uncertainty. Simply providing numerical translations alongside reports or text containing verbal probabilities should encourage consistency, but these guidelines are often ignored. In an online experiment with 924 participants, we compared four different formats for presenting verbal probabilities with the numerical guidelines used in the US Intelligence Community Directive (ICD) 203 to see whether any could improve the correspondence between the intended meaning and participants' interpretation ('in-context'). This extends previous work in the domain of climate science. The four experimental conditions we tested were: (1) numerical guidelines bracketed in text, e.g. 'X is very unlikely (05–20%)'; (2) a click-through to the full guidelines table in a new window; (3) numerical guidelines shown in a mouse-over tooltip; and (4) no guidelines provided (control). Results indicate that correspondence with the ICD 203 standard is substantially improved only when numerical guidelines are bracketed in text. For this condition, average correspondence was 66%, compared with 32% in the control. We also elicited 'context-free' numerical judgements from participants for each of the seven verbal probability expressions contained in ICD 203 (i.e., we asked participants what range of numbers they, personally, would assign to those expressions), and constructed 'evidence-based lexicons' based on two methods from similar research, 'membership functions' and 'peak values', that reflect our large sample's intuitive translations of the terms. Better aligning the intended and assumed meaning of fuzzy words like 'unlikely' can reduce communication problems between the reporter and receiver of probabilistic information. In turn, this can improve decision making under uncertainty.
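To make the idea of 'correspondence' and lexicon building concrete, the sketch below shows one plausible way to score agreement between participants' numerical interpretations and the ICD 203 ranges, and to derive a simple peak-style value from context-free judgements. The toy data, the median-based peak value and the function names are illustrative assumptions, not the study's analysis code.

```python
# Hypothetical sketch: score how often numerical interpretations of a verbal
# probability phrase fall inside its ICD 203 range, and compute a simplified
# stand-in for a "peak value" (here, the median of participants' best guesses).
# Only a subset of the seven ICD 203 phrases is shown.

ICD_203 = {                 # phrase: (lower %, upper %)
    "very unlikely": (5, 20),
    "unlikely": (20, 45),
    "very likely": (80, 95),
}

def correspondence(phrase, interpretations, lexicon=ICD_203):
    """Proportion of numerical interpretations (in %) lying within the
    lexicon's range for the given phrase."""
    lo, hi = lexicon[phrase]
    hits = sum(1 for x in interpretations if lo <= x <= hi)
    return hits / len(interpretations)

def peak_value(interpretations):
    """Median of participants' best-guess numbers for a phrase."""
    ordered = sorted(interpretations)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

answers = [10, 15, 30, 5, 25, 18]   # toy best guesses (%) for "very unlikely"
print(correspondence("very unlikely", answers))  # 0.67: 4 of 6 inside 5-20%
print(peak_value(answers))                       # 16.5
```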