2023
DOI: 10.3390/a16070310

Probability Density Estimation through Nonparametric Adaptive Partitioning and Stitching

Abstract: We present a novel nonparametric adaptive partitioning and stitching (NAPS) algorithm to estimate a probability density function (PDF) of a single variable. Sampled data is partitioned into blocks using a branching tree algorithm that minimizes deviations from a uniform density within blocks of various sample sizes arranged in a staggered format. The block sizes are constructed to balance the load in parallel computing, as the PDF for each block is independently estimated using the nonparametric maximum entropy…
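
The abstract describes estimating the PDF block by block and then stitching the pieces together. The sketch below only illustrates that block-wise idea and is not the authors' NAPS implementation: it uses simple quantile-based blocks in place of the branching-tree partitioner, SciPy's gaussian_kde as a stand-in for the per-block nonparametric maximum entropy estimator, and it performs no boundary matching of the kind NAPS's stitching step provides.

```python
# Illustrative block-wise density estimate (NOT the NAPS algorithm):
# quantile blocks replace the branching-tree partitioner, and gaussian_kde
# stands in for the per-block nonparametric maximum entropy estimator.
import numpy as np
from scipy.stats import gaussian_kde

def blockwise_pdf_estimate(samples, n_blocks=4):
    """Split sorted samples into equal-count blocks, fit an estimator per
    block independently, and return a stitched piecewise PDF callable."""
    x = np.sort(np.asarray(samples, dtype=float))
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_blocks + 1))
    blocks, weights, estimators = [], [], []
    for i in range(n_blocks):
        lo, hi = edges[i], edges[i + 1]
        # Half-open intervals, except the last block which keeps its endpoint.
        mask = (x >= lo) & ((x < hi) if i < n_blocks - 1 else (x <= hi))
        blocks.append((lo, hi))
        weights.append(mask.sum() / x.size)        # block's share of the data
        estimators.append(gaussian_kde(x[mask]))   # independent per-block fit

    def pdf(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        out = np.zeros_like(t)
        for i, ((lo, hi), w, kde) in enumerate(zip(blocks, weights, estimators)):
            inside = (t >= lo) & ((t < hi) if i < n_blocks - 1 else (t <= hi))
            out[inside] = w * kde(t[inside])       # weight piece by block mass
        return out

    return pdf

# Example on synthetic bimodal data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 5000), rng.normal(3.0, 1.0, 5000)])
pdf = blockwise_pdf_estimate(data, n_blocks=8)
print(pdf([-2.0, 0.0, 3.0]))
```

Because each block is fitted independently of the others, the per-block estimates could be dispatched to separate workers, which is the load-balancing motivation mentioned in the abstract.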

Cited by 1 publication (3 citation statements)
References 31 publications
“…Thus, they are no longer valid when dealing with very large datasets. Several recent approaches can be found in the literature that adapt algorithms when data or even model parameters do not fit in the memory (see, e.g., [6][7][8]).…”
Section: Parameter Estimation from Large-Scale Datasets
confidence: 99%
“…The iterative nature of the algorithms that are usually applied, which operate over the entire dataset, makes them extremely difficult to parallelise and requires new approaches. For example, in [7], a new methodology for estimating a probability density function for large datasets using adaptive partitioning and stitching is proposed. A scaling-up strategy is also required for the learning algorithm of the LHM.…”
Section: Parameter Estimation from Large-Scale Datasets
confidence: 99%