Bayesian networks (BNs) are extremely useful for causal and probabilistic modelling in many real-world applications, and are often built with information elicited from groups of domain experts. But their potential for reasoning and decision support has been limited by two major factors: the need for significant normative knowledge, and the lack of any validated methods or software supporting collaboration. Consequently, we have developed a web-based structured technique – Bayesian Argumentation via Delphi (BARD) – to enable groups of domain experts to receive minimal normative training and then collaborate effectively to produce high-quality BNs. BARD harnesses multiple perspectives on a problem, while minimising the biases manifest in freely interacting groups, via a Delphi process: solutions are first produced individually, then shared, followed by an opportunity for individuals to revise their solutions. To test the hypothesis that BNs improve due to Delphi, we conducted an experiment in which individuals with a little BN training and practice produced structural models using BARD for two Bayesian reasoning problems. Participants then received 6 other structural models for each problem, rated their quality on a 7-point scale, and revised their own models if they wished. Both top-rated and revised models were, on average, of significantly better quality (scored against a gold standard) than the initial models, with large and medium effect sizes respectively. We conclude that Delphi – and BARD – improves the quality of BNs produced by groups. Further, although rating cannot create new models, it seems quicker and easier than revision and yielded significantly better models – so, we suggest efficient BN amalgamation should include both.
Groups provide several benefits over individuals for judgment and decision making, but they suffer from problems too. Structured-group techniques, like Delphi, use strictly controlled information exchange between individuals to retain the positive aspects of group interaction while ameliorating the negative ones. These methods regularly use 'nominal' groups that interact in a remote, distributed, and often anonymous manner, thus lending themselves to internet applications, with a consequent recent increase in popularity. However, evidence for the utility of these techniques is scant, major reasons for which are the difficulty of maintaining experimental control and the logistical problems of recruiting sufficient empirical 'groups' to produce statistically meaningful results. As a solution, we present the Simulated Group Response Paradigm, in which individual responses are first elicited in a pre-study – or created by the experimenter – and then subsequently fed back to highly controlled simulated groups. This paradigm facilitates investigation of the factors leading to virtuous opinion change in groups, and the subsequent development of structured-group techniques.
Groups often make better judgements than individuals, and recent research suggests that this phenomenon extends to the deception detection domain. The present research investigated whether the influence of groups enhances the accuracy of veracity judgements, and whether group size influences deception detection accuracy. Two hundred fifty participants evaluated the veracity of written statements with a pre-established detection accuracy rate of 60%, then viewed either the judgements and rationales of several other group members or a short summary of the written statement, and revised or restated their own judgements accordingly. Participants' second responses were significantly more accurate than their first, suggesting a small positive effect of structured groups on deception detection accuracy. Group size did not have a significant effect on detection accuracy. The present work extends our understanding of the utility of group deception detection, suggesting that asynchronous, structured groups outperform individuals at detecting deception.
An experimental study exploring the effect of Delphi group size and opinion diversity on participants' experience of the process, as measured by their perceived cognitive load and their self-reported satisfaction with the process.