2020
DOI: 10.3390/e22111322
Fractional Dynamics Identification via Intelligent Unpacking of the Sample Autocovariance Function by Neural Networks

Abstract: Many single-particle tracking data related to motion in crowded environments exhibit anomalous diffusion behavior. This phenomenon can be described by different theoretical models. In this paper, fractional Brownian motion (FBM) was examined as an exemplary Gaussian process with fractional dynamics. The autocovariance function (ACVF) completely determines a Gaussian process. In the case of experimental data with anomalous dynamics, the main problem is first to recognize the type of an…
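Since the sample ACVF is the quantity the network "unpacks", a minimal NumPy sketch may help fix ideas: it computes the biased sample ACVF of a series and, for comparison, the standard theoretical autocovariance of fractional Gaussian noise (the increment process of FBM) with Hurst index H. The function names and the biased normalization are illustrative choices, not the authors' code.

```python
import numpy as np

def sample_acvf(x, max_lag):
    """Empirical autocovariance of a 1-D series x for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # biased estimator (divide by n), a common choice for ACVF feature vectors
    return np.array([np.dot(xc[: n - k], xc[k:]) / n for k in range(max_lag + 1)])

def fgn_acvf(h, max_lag, sigma=1.0):
    """Theoretical autocovariance of fractional Gaussian noise with Hurst index h."""
    k = np.arange(max_lag + 1, dtype=float)
    return 0.5 * sigma**2 * (
        np.abs(k + 1) ** (2 * h) - 2 * np.abs(k) ** (2 * h) + np.abs(k - 1) ** (2 * h)
    )
```

In the paper's setting, a vector such as `sample_acvf(traj_increments, max_lag)` would serve as the network input from which the fractional-dynamics parameters are estimated.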

Cited by 5 publications (10 citation statements), published between 2021 and 2023. References 74 publications.

Citation statements:
“…Following Ref. [77], we decided to check if the autocorrelation function taken as additional input improves the accuracy of the model. We combined the raw trajectories with their autocorrelations calculated at lags 8, 16, and 24 into a single tensor structure and used it as input to the model.…”
Section: Results
confidence: 99%
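The statement above describes a concrete input construction. A minimal NumPy sketch of one plausible reading is given below: lagged products of the centered trajectory are stacked with the raw trajectory as channels of a single tensor. The channel layout, zero padding, and variance normalization are assumptions for illustration, not the cited paper's exact preprocessing.

```python
import numpy as np

LAGS = (8, 16, 24)  # the lags quoted in the citing paper

def with_autocorr_channels(traj, lags=LAGS):
    """Stack a raw trajectory with lag-k autocorrelation products as channels.

    Returns an array of shape (1 + len(lags), len(traj)); the exact tensor
    layout used by the citing paper is an assumption here.
    """
    x = np.asarray(traj, dtype=float)
    n = len(x)
    xc = x - x.mean()
    channels = [x]
    for k in lags:
        ac = np.zeros(n)
        ac[: n - k] = xc[: n - k] * xc[k:]          # elementwise lag-k products
        channels.append(ac / max(xc.var(), 1e-12))  # normalize by sample variance
    return np.stack(channels)                       # single input tensor
```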
“…The architecture of the proposed neural network will be simple and similar to the one in [34]. However, one significant change will be made that allows for a variable input vector size.…”
Section: Neural Network Algorithm
confidence: 99%
“…The RNN architecture is especially needed because with a multilayer perceptron (MLP) [45] (used in [34]) the model's error does not plateau (Fig. 3): with increasing e_T the error decreases at a log-linear rate (beyond a certain e_T).…”
Section: Neural Network Algorithm
confidence: 99%
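The two statements above contrast a fixed-input-size MLP with a recurrent model that accepts variable-length inputs. A minimal PyTorch sketch (an assumption, not the cited paper's architecture; `HurstGRU`, the hidden size, and the scalar output head are illustrative choices) shows why a recurrent network imposes no fixed input dimension:

```python
import torch
import torch.nn as nn

class HurstGRU(nn.Module):
    """Minimal recurrent estimator sketch: variable-length sequence -> scalar H.

    Illustrative only; layer sizes and the single-output head are assumptions,
    not the cited paper's exact model.
    """
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, seq_len, 1), any seq_len
        _, h_n = self.rnn(x)         # h_n: (1, batch, hidden), final state
        return self.head(h_n[-1])    # (batch, 1) estimate of H

model = HurstGRU()
for T in (64, 128, 256):             # the same weights accept any input length
    out = model(torch.randn(2, T, 1))
    assert out.shape == (2, 1)
```

An MLP's first linear layer fixes the input dimension at construction time, whereas the GRU applies the same cell at every time step, so trajectories (or sample-ACVF vectors) of any length can be processed without retraining.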