2018
DOI: 10.1111/2041-210x.13083

Assessing adequacy of models of phyletic evolution in the fossil record

Abstract: Comparing relative fit of different models of evolutionary dynamics to time series of phyletic change is a common tool when interpreting the fossil record. However, a measure of relative fit is no guarantee the preferred model describes the data well. Selecting a good model is essential for robust inferences, but we are currently lacking tools to investigate if a model of phyletic evolution represents an adequate description of trait dynamics in fossil data. This study develops a general statistical framework …

Cited by 19 publications (15 citation statements) | References 42 publications

“…Following Burnham and Anderson (2003), we used the rule of thumb that models with ΔAICc < 2 are not substantially less plausible than the best-fit model. Finally, to assess the adequacy of each model, the package ‘adePEM’ (Voje 2018) was used to test the models for autocorrelation, length of runs and fixed variance over time (and, in the case of the stasis model, net evolution) by running a large number (here 10,000) of simulated time series under the parameters of the fitted models and checking whether the observed series is likely to belong to the same distribution (the null hypothesis, with p > 0.05 taken as passing). In addition, we applied Spearman's rank correlation; all of these time series showed no autocorrelation, having passed the Box-Pierce test (Box and Pierce 1970) as implemented in base R by the function Box.test, so their values can be treated as independent.…”
Section: Methods
confidence: 99%
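A rough base-R sketch of that workflow is given below. It assumes an unbiased random walk has already been selected (AICc comparison would normally be done with paleoTS, e.g. fit3models), uses toy data and illustrative object names, and implements only one adequacy statistic by hand; the ‘adePEM’ package automates such checks with its own set of test statistics.

## Parametric-bootstrap adequacy logic that 'adePEM' automates
## (assumed workflow; data and object names are illustrative, not from the study).
set.seed(1)
tt    <- 1:40
x_obs <- cumsum(rnorm(length(tt), mean = 0, sd = 0.3))   # toy trait-mean series
sigma2_hat <- var(diff(x_obs))   # step variance of the fitted unbiased random walk

## One adequacy statistic: lag-1 autocorrelation of the differenced trait means.
auto_stat <- function(x) {
  d <- diff(x)
  cor(head(d, -1), tail(d, -1))
}

## Simulate 10,000 series under the fitted model and collect the statistic.
sim_stat <- replicate(10000, auto_stat(cumsum(rnorm(length(tt), 0, sqrt(sigma2_hat)))))
obs_stat <- auto_stat(x_obs)

## The test is passed if the observed statistic falls within the distribution of
## simulated statistics (summarised here by its 2.5%-97.5% range).
quantile(sim_stat, c(0.025, 0.975))
obs_stat

## Box-Pierce test of autocorrelation (base R) and Spearman's rank correlation,
## applied to the toy series purely for illustration.
Box.test(x_obs, type = "Box-Pierce")
cor.test(x_obs, tt, method = "spearman")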
“…The p-values of the adequacy tests represent the proportion of the simulated test statistics that is larger (or smaller) than the test statistic calculated on the actual data, divided by 0.5. A test is passed if the observed test statistic falls within the range of the distribution of simulated test statistics (Voje 2018).…”
Section: Methods
confidence: 99%
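A compact sketch of that p-value construction, with made-up numbers (adequacy_p, sim_stat and obs_stat are illustrative names, not adePEM functions):

## Two-sided adequacy p-value: the smaller tail proportion of simulated test
## statistics relative to the observed one, divided by 0.5 (i.e. doubled).
## Illustrative sketch only; not taken from the 'adePEM' source.
adequacy_p <- function(sim_stat, obs_stat) {
  min(mean(sim_stat >= obs_stat), mean(sim_stat <= obs_stat)) / 0.5
}

sim_stat <- rnorm(10000)               # stand-in for simulated test statistics
adequacy_p(sim_stat, obs_stat = 0.05)  # near the centre of the distribution: p close to 1
adequacy_p(sim_stat, obs_stat = 3.00)  # far out in the tail: p near 0

A statistic lying inside the simulated distribution therefore yields a large p-value and the test is passed, whereas one in either tail yields a small p-value and flags the model as inadequate for that feature of the data.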
“…However, the statistical method (Hunt 2006) might be biased toward the URW (unbiased random walk). The URW can generate a wider range of trait dynamics than the other models, but this does not imply that the majority of lineages evolve according to a URW (Voje 2018). Nevertheless, both the URW and GRW (general random walk) models passed the adequacy tests for the occurrences-weighted analyses (Table 2).…”
Section: Directionality Versus Random Walk
confidence: 99%
“…This is similar in spirit to controlling the false-positive rate of a hypothesis test, but in practice it does not map directly onto Type 1 or Type 2 errors (Cullan et al. 2020). More recently, Kjetil Voje has further developed Hunt's approach by incorporating a set of misspecification tests to help determine whether the best-fitting model's statistical assumptions are also consistent with the observed data (Voje 2018). Finally, Cullan et al. (2020) have introduced a novel method for hypothesis testing that enables control of the Type 1 error rate but does not require null models.…”
Section: Unequal Evidence in Paleobiology: A Motivating Example
confidence: 99%