2020
DOI: 10.1007/s11356-020-07761-0

Better define beta–optimizing MDD (minimum detectable difference) when interpreting treatment-related effects of pesticides in semi-field and field studies

Abstract: The minimum detectable difference (MDD) is a measure of the difference between the means of a treatment and the control that must exist to detect a statistically significant effect. It is a measure at a defined level of probability and a given variability of the data. It provides an indication of the robustness of statistically derived effect thresholds such as the lowest observed effect concentration (LOEC) and the no observed effect concentration (NOEC) when interpreting treatment-related effects on a popul…
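For orientation, a common formulation of the MDD for a single treatment-versus-control comparison (a sketch of the approach generally used in this literature, e.g. Duquesne et al. 2020; the symbols are generic and not taken from the abstract above) is:

\[
\mathrm{MDD} = t_{1-\alpha,\,df}\; s_{\mathrm{pooled}} \sqrt{\frac{1}{n_C} + \frac{1}{n_T}},
\qquad
\%\mathrm{MDD} = 100 \cdot \frac{\mathrm{MDD}}{\bar{x}_C},
\]

where \(t_{1-\alpha,\,df}\) is the critical value of the t distribution used for the test, \(s_{\mathrm{pooled}}\) the pooled standard deviation, \(n_C\) and \(n_T\) the control and treatment sample sizes, and \(\bar{x}_C\) the control mean.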

Cited by 12 publications (8 citation statements)
References 21 publications
“…The difficulty is that we want to get knowledge about the real world (real effect), but the MDD only provides information about an effect on the measurement level: for a given sample size and estimated variance, an estimated (i.e., measured) effect of the size of the MDD would be significant, always. A real effect size equal to the MDD, however, will be significant in only 50 to 60% of the conducted experiments (see The MDD Does Not Have Controlled Power section; Duquesne et al 2020), which is far below the usually targeted power of 80% (European Food Safety Authority 2013; European Food Safety Authority PPR Panel 2015). The same differentiation has to be applied to the interpretation of the MDD as an upper bound for the real effect: for a nonsignificant experiment, the estimated effect will always be smaller than the MDD (information that is already covered by the nonsignificant p value; see also Hoenig and Heisey 2001; Colegrave and Ruxton 2003).…”
Section: Discussion (citation type: mentioning)
confidence: 99%
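The point made in this passage can be illustrated with a small simulation: if the true (real-world) effect is exactly as large as the nominal MDD of the design, only roughly half of repeated experiments yield a significant result. The sketch below assumes a one-sided two-sample t test with equal group sizes and an MDD computed from the nominal standard deviation; all parameter values are illustrative and not taken from the cited studies.

```python
# Sketch: a true effect exactly equal to the (nominal) MDD is detected in only
# about 50-60% of repeated experiments, illustrating the point quoted above.
# Simplification: the MDD is computed once from the nominal SD, whereas in
# practice it is recomputed from the estimated variance of each experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sd, alpha = 4, 1.0, 0.05              # replicates per group, nominal SD, test level
df = 2 * n - 2
t_crit = stats.t.ppf(1 - alpha, df)      # one-sided critical value
mdd = t_crit * sd * np.sqrt(2 / n)       # nominal MDD for this design

hits = 0
n_sim = 20_000
for _ in range(n_sim):
    control = rng.normal(0.0, sd, n)
    treatment = rng.normal(-mdd, sd, n)  # true effect size == MDD (a decrease)
    t_stat, p_two = stats.ttest_ind(control, treatment)
    p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2  # one-sided: control > treatment
    hits += p_one < alpha

print(f"fraction significant: {hits / n_sim:.2f}")  # typically ~0.5-0.6, not 0.8
```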
“…For a one‐sided t test, the corresponding power is the area below the t ‐distribution from t ‐critical (which corresponds to the MDD; see Figure 3) to infinity (red area below the solid line in Figure 4A). If the t ‐distribution for a real effect of size MDD (i.e., around t ‐critical) was symmetric (which is true asymptotically), the corresponding power would be 50%, because the values of the t ‐distribution would fall in equal proportions on either side of t ‐critical (Hoenig and Heisey 2001; Duquesne et al 2020). Thus, asymptotically, the MDD has a power of 50%, which would not be considered high according to normal statistical standards.…”
Section: Clarifying a Common Misunderstanding: The MDD Does Not Have Controlled Power (citation type: mentioning)
confidence: 99%
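The asymptotic argument quoted here can be reproduced directly from the noncentral t distribution: the power against a real effect of size MDD is the tail area of a noncentral t distribution (noncentrality equal to t-critical) beyond t-critical. A minimal sketch, with illustrative group sizes only:

```python
# Sketch of the asymptotic argument quoted above: for a one-sided t test, the
# power against a real effect whose size equals the MDD is the tail area of a
# noncentral t distribution beyond t-critical, which is close to 50%.
from scipy import stats

alpha = 0.05
for n in (4, 10, 100):                            # replicates per group (illustrative)
    df = 2 * n - 2
    t_crit = stats.t.ppf(1 - alpha, df)           # one-sided critical value
    power = stats.nct.sf(t_crit, df, nc=t_crit)   # P(T > t_crit | true effect = MDD)
    print(f"n={n:>3}  power at effect = MDD: {power:.3f}")
# The value approaches 0.5 as n grows, because the noncentral t distribution
# becomes symmetric around t_crit.
```

For small samples the skew of the noncentral t pushes the value slightly above 50%, which matches the 50 to 60% range quoted above.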
“…Moreover, the MDD calculation allows the assessment of risks considering the different effect classes and specific protection goals as proposed by the EFSA's opinion (EFSA PPR Panel, 2017). In our study, the MDDs (Duquesne et al., 2020) of our Dunnett's tests ranged from 74.3% to 83.9%, using power calculations for t tests with R software and transposing the MDDs obtained to the Dunnett's test with 6 groups of 4 samples with a bootstrap approach. A pre-sampling before the beginning of the study is also recommended in order to get an overview of soil organism distribution in the field and to balance sampling size and statistical relevance.…”
Section: Methodological Shortcomings of the Standard Test Guideline (citation type: mentioning)
confidence: 99%
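As a rough illustration of the kind of calculation described in this passage, the sketch below computes an absolute MDD and a %MDD for a single control-versus-treatment comparison using the per-comparison t critical value. The transposition to Dunnett's test with 6 groups of 4 samples via a bootstrap, as the authors describe, is not reproduced here, and all data values are hypothetical placeholders.

```python
# Sketch: absolute MDD and %MDD for one control-vs-treatment comparison,
# using a pooled SD and the one-sided per-comparison t critical value.
# Data values are hypothetical placeholders, not from the cited study.
import numpy as np
from scipy import stats

control = np.array([12.1, 10.4, 11.8, 9.9])   # hypothetical abundances
treatment = np.array([8.7, 10.1, 7.9, 9.4])   # hypothetical abundances
alpha = 0.05

n_c, n_t = len(control), len(treatment)
df = n_c + n_t - 2
sp = np.sqrt(((n_c - 1) * control.var(ddof=1) +
              (n_t - 1) * treatment.var(ddof=1)) / df)   # pooled SD
t_crit = stats.t.ppf(1 - alpha, df)                      # one-sided critical value
mdd = t_crit * sp * np.sqrt(1 / n_c + 1 / n_t)           # absolute MDD
mdd_pct = 100 * mdd / control.mean()                     # %MDD relative to control mean

print(f"MDD = {mdd:.2f}, %MDD = {mdd_pct:.1f}%")
```

With Dunnett's multiplicity-adjusted critical value in place of the per-comparison t value, the resulting MDD would be somewhat larger, since the adjusted critical value exceeds the unadjusted one.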