2014
DOI: 10.1111/mec.12705
Genome scan methods against more complex models: when and how much should we trust them?

Abstract: The recent availability of next-generation sequencing (NGS) has made possible the use of dense genetic markers to identify regions of the genome that may be under the influence of selection. Several statistical methods have been developed recently for this purpose. Here, we present the results of an individual-based simulation study investigating the power and error rate of popular or recent genome scan methods: linear regression, Bayescan, BayEnv and LFMM. Contrary to previous studies, we focus on complex, hi…
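The abstract compares linear regression against Bayescan, BayEnv and LFMM. As a minimal sketch of the simplest of these, a univariate linear-regression genome scan regresses each locus's genotype on an environmental variable and flags loci with significant slopes. The simulated data and the Bonferroni threshold below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_ind, n_loci = 100, 50
env = rng.normal(size=n_ind)                      # environmental variable per individual
geno = rng.integers(0, 3, size=(n_ind, n_loci))   # 0/1/2 genotypes, neutral by default
# make locus 0 track the environment (a simulated "selected" locus)
geno[:, 0] = np.clip(np.round(1 + env), 0, 2).astype(int)

# per-locus regression of genotype on environment; collect p-values
pvals = np.array([stats.linregress(env, geno[:, j]).pvalue
                  for j in range(n_loci)])
outliers = np.where(pvals < 0.05 / n_loci)[0]     # Bonferroni-corrected threshold
```

The environmentally driven locus 0 lands in `outliers`; note that this naive scan does not control for neutral population structure, which is exactly the weakness the compared methods (Bayescan, BayEnv, LFMM) try to address.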

Cited by 287 publications (440 citation statements)
References 30 publications
“…Although both methods essentially operate based on the same principles, that is, they test for GEAs after controlling for the portion of variation that is due to neutral population structure, their results differed greatly ( P. strobus : 19% overlap; P. monticola : no overlap). The relative performance of Bayenv and LFMM depends on the demographic scenario and sampling design, and a relatively low overlap between the two methods has previously been observed in simulation studies (Lotterhos & Whitlock, 2015; de Villemereuil et al., 2014). …”
Section: Discussion
confidence: 99%
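The low overlap described above can be quantified directly as the intersection of the two methods' candidate sets. A minimal sketch, using hypothetical outlier sets (the locus indices below are invented for illustration, not taken from the cited studies):

```python
# Hypothetical outlier locus indices reported by two GEA methods
bayenv_hits = {12, 47, 88, 103, 251}
lfmm_hits = {47, 96, 103, 310}

shared = bayenv_hits & lfmm_hits
# overlap relative to the union of all candidates (Jaccard index)
jaccard = len(shared) / len(bayenv_hits | lfmm_hits)
print(sorted(shared), round(jaccard, 2))   # [47, 103] 0.29
```

Reporting overlap relative to the union (rather than to one method's list) avoids inflating agreement when one method returns far fewer candidates than the other.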
“…This method uses a hierarchical Bayesian mixed model based on a variant of PCA, in which neutral population structure is introduced via ( k ) unobserved or latent factors. We implemented the LFMM method using the default individual‐based data specification to avoid potential biases due to unequal population sample sizes (de Villemereuil et al., 2014). To determine k , we performed a PCA on individual allele frequencies using the LEA package in R (Frichot & François, 2015).…”
Section: Methods
confidence: 99%
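The excerpt determines k (the number of latent factors) with a PCA on individual allele frequencies via the LEA package in R. A rough Python sketch of the same idea follows, using a simulated two-population genotype matrix and a crude variance-explained cutoff; both the simulation and the cutoff are illustrative assumptions, not LEA's actual criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
# genotype matrix (individuals x loci): two groups with shifted allele frequencies
g1 = rng.binomial(2, 0.2, size=(50, 200))
g2 = rng.binomial(2, 0.8, size=(50, 200))
geno = np.vstack([g1, g2]).astype(float)

centered = geno - geno.mean(axis=0)
# singular values give the PCA eigen-spectrum of the centered matrix
sing = np.linalg.svd(centered, compute_uv=False)
var_explained = sing**2 / np.sum(sing**2)
# crude rule: count components well above the average share of variance;
# a sharp drop after the first component suggests k = 1 latent factor here
k = int(np.sum(var_explained > 10 * var_explained.mean()))
```

With two well-separated groups the scree plot collapses after the first component, so this heuristic recovers k = 1; on real data one would inspect the scree plot rather than trust a fixed multiplier.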
“…The reasons for the popularity of GEA analyses are practical: They require no phenotypic data or prior genomic resources, do not require experimental approaches (such as reciprocal transplants) to demonstrate local adaptation, and are often more powerful than differentiation‐based outlier detection methods (De Mita et al., 2013; de Villemereuil, Frichot, Bazin, François, & Gaggiotti, 2014; Forester, Lasky, Wagner, & Urban, 2018; Lotterhos & Whitlock, 2015). In particular, participants considered how and why detection rates differed between univariate and multivariate GEAs, exploring the use of latent factor mixed models (Frichot, Schoville, Bouchard, & Francois, 2013) and redundancy analysis (Forester, Jones, Joost, Landguth, & Lasky, 2016; Lasky et al., 2012), respectively.…”
Section: Improving Downstream Computational Analyses
confidence: 99%