2017
DOI: 10.1002/9781119324560.ch4
Single Molecule Data Analysis: An Introduction

Cited by 40 publications (98 citation statements)
References 288 publications (579 reference statements)
“…We review both frequentist and Bayesian inference methods (sections 3.2.1 and 3.2.2) and refer the reader to a recent review for information theoretic inference schemes. 59 …”
Section: Data-driven Modeling: Key Concepts
confidence: 99%
“…Here we review two approaches to parameter inference, frequentist or Bayesian, and we refer the reader to Tavakoli et al 59 and Pressé et al 113 for information theoretic approaches.…”
Section: Data-driven Modeling: Key Concepts
confidence: 99%
“…However, we cannot simply maximize our likelihood, since, as is well known, likelihood maximization would overfit the data by favoring too many steps in the photobleaching time trace (Kalafut and Visscher, 2008). Typically, to prevent overfitting, model selection criteria—including the SIC (Schwarz, 1978; Cavanaugh and Neath, 1999), the Akaike information criterion (AIC; Akaike, 1974), and others (Kadane and Lazar, 2004; reviewed most recently in Tavakoli et al., 2016)—compare different models on the basis of 1) their fit to the data and 2) the number of parameters (i.e., the complexity) of the model (Claeskens and Hjort, 2008; Tavakoli et al.…”
Section: Methods
confidence: 99%
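The excerpt above describes model selection criteria that trade off fit against parameter count. A minimal sketch of that trade-off (not from the chapter; the log-likelihoods, parameter counts, and trace length below are invented purely for illustration) computes the AIC and the SIC/BIC for two hypothetical step-count models:

```python
import math

def aic(log_likelihood, k):
    # Akaike information criterion: 2k - 2 ln L
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Schwarz information criterion (SIC, also BIC): k ln n - 2 ln L
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical comparison: a 3-step vs. a 10-step fit to the same
# photobleaching trace (all numbers are illustrative, not real data)
n = 1000                            # points in the time trace
ll_simple, k_simple = -520.0, 4     # few steps -> few parameters
ll_complex, k_complex = -505.0, 11  # more steps -> better raw fit

# The complex model always wins on raw likelihood; whether it wins
# overall depends on the parameter-count penalty of the criterion.
print(aic(ll_simple, k_simple), aic(ll_complex, k_complex))
print(bic(ll_simple, k_simple, n), bic(ll_complex, k_complex, n))
```

With these made-up numbers the SIC's stronger penalty (scaling with ln n) favors the simpler model even though the raw likelihood favors the complex one, which is the overfitting guard the excerpt refers to.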
“…The idea behind model averaging is simple: the likelihood function depends on a number of parameters (models), and, by averaging over these models, we account for models that are both good and bad fits to the data. Fundamentally, this process penalizes complexity by weighing into consideration models that are poor fits to the data (Schwarz, 1978; Tavakoli et al., 2016).…”
Section: Methods
confidence: 99%
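The excerpt above says that averaging the likelihood over parameter values penalizes complexity, because poor-fit settings drag down the average. A toy Monte Carlo sketch of this idea (not from the chapter; the data, prior range, and sample count are assumptions chosen for illustration) compares a zero-parameter model against a one-parameter model whose evidence is the likelihood averaged over a broad prior:

```python
import math
import random

random.seed(0)

def gauss_loglik(data, mu, sigma=1.0):
    # Log-likelihood of i.i.d. Gaussian data with known sigma
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

# Illustrative data scattered around zero (invented, not real)
data = [0.1, -0.2, 0.05, 0.15, -0.1]

# Model A: mean fixed at 0, no free parameters -> evidence is the likelihood
evidence_a = math.exp(gauss_loglik(data, 0.0))

# Model B: mean free, broad uniform prior on [-10, 10];
# its evidence is the likelihood *averaged* over prior draws
samples = [random.uniform(-10, 10) for _ in range(20000)]
evidence_b = sum(math.exp(gauss_loglik(data, mu)) for mu in samples) / len(samples)

# Averaging over the many poor-fit means dilutes model B's evidence,
# so the simpler model A is favored even though B contains A.
print(evidence_a > evidence_b)
```

The design point is that no explicit penalty term appears anywhere: the complexity penalty emerges automatically from averaging over parameter settings that fit the data badly, which is the mechanism the excerpt attributes to model averaging.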