2020
DOI: 10.1111/conl.12726

Improving scientific rigour in conservation evaluations and a plea for transparency on potential biases

Abstract: The delivery of rigorous and unbiased evidence on the effects of interventions lies at the heart of the scientific method. Here we examine scientific papers evaluating agri-environment schemes, the principal instrument to mitigate farmland biodiversity declines worldwide. Despite previous warnings about rudimentary study designs in this field, we found that the majority of studies published between 2008 and 2017 still lack robust study designs to strictly evaluate intervention effects. Potential sources of bias…

Cited by 32 publications (36 citation statements) · References 42 publications
“…Evaluating research for its reliability and relevance is commonly called 'critical appraisal'. Assessments of study reliability determine whether there are problems of 'internal validity', such as unrepresentative sampling, inappropriate methods of measurement, or inadequate statistical analyses (Josefsson et al 2020). Assessments of study relevance, or 'external validity', require a detailed description of the methods to determine whether a study's findings are likely to generalize to a question of interest (Cooke et al 2017a) - for example, how similar was the population and/or the environmental conditions to those in my own system?…”
Section: Open Materials (Making Research Results Interpretable)
Citation type: mentioning (confidence: 99%)
“…Experiments that add elements (e.g. planting shrubs) to landscape plots and compare these to control landscapes will be necessary to disentangle the presence of SPELs from other habitat features (Josefsson et al., 2020). Moreover, one may predict a non‐linear response of birds to SPELs, such as a positive effect at low SPEL densities and no effect at high SPEL densities (Fischer et al., 2010a).…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…For example, only 7% of studies examining the effect of logging on tropical forest ecosystems were deemed free of pseudoreplication (Ramage et al, 2013). A recent simulation analysis comparing inference from different study designs demonstrated that a simpler study design produces misleading results (Christie et al, 2019), and evidence suggests that simple designs are widespread when examining conservation interventions (Josefsson et al, 2020). Underreporting of key methodological information compounds these issues by impeding replication efforts and evidence synthesis (Gerstner et al, 2017; Grames & Elphick, 2020).…”
Section: Avoid Flaws In Research Design and Methods
Citation type: mentioning (confidence: 99%)
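The simulation comparison cited above (Christie et al, 2019) contrasts inference from simple and counterfactual study designs. A minimal sketch of that idea follows; it is not taken from any of the cited papers, and the species counts, site numbers and effect sizes are illustrative assumptions only. It shows how a simple Before-After comparison attributes a shared background trend to the intervention, whereas a Before-After-Control-Impact (BACI) contrast does not.

```python
# Minimal sketch (assumed data, not from Christie et al. 2019): why a simple
# Before-After design can mislead when a background trend confounds the
# intervention effect, while a BACI design recovers the true (null) effect.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 50
true_effect = 0.0          # the intervention actually does nothing
background_trend = 2.0     # counts rise everywhere, treated or not

# Hypothetical bird counts before/after at impact (treated) and control sites
before_impact  = rng.normal(10, 2, n_sites)
after_impact   = before_impact + background_trend + true_effect + rng.normal(0, 1, n_sites)
before_control = rng.normal(10, 2, n_sites)
after_control  = before_control + background_trend + rng.normal(0, 1, n_sites)

# Simple Before-After estimate: attributes the whole trend to the intervention
ba_estimate = (after_impact - before_impact).mean()

# BACI estimate: difference-in-differences removes the shared trend
baci_estimate = (after_impact - before_impact).mean() - (after_control - before_control).mean()

print(f"Before-After estimate: {ba_estimate:.2f}  (biased towards the trend of {background_trend})")
print(f"BACI estimate:         {baci_estimate:.2f}  (close to the true effect of {true_effect})")
```

Running the sketch with different seeds makes the point of the quoted passage: the design without control sites is consistently wrong by roughly the size of the background trend, regardless of sample size.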
“…An effective means of addressing these flaws is to promote scientific rigor and transparency through training and materials on best practices (Josefsson et al, 2020; Table S1). Training can include familiarizing authors with practices such as open preregistration and registered reports, which allow feedback on research questions, study design and analyses prior to collecting data (Parker et al, 2019).…”
Section: Avoid Flaws In Research Design and Methods
Citation type: mentioning (confidence: 99%)