2023
DOI: 10.1007/s11063-023-11332-y

Weighting Approaches in Data Mining and Knowledge Discovery: A Review

Cited by 3 publications (2 citation statements)
References 361 publications
“…The advancement of technology has facilitated the accumulation of vast amounts of data from various sources such as databases, web repositories, and files, necessitating robust tools for analysis and decision-making [1, 2]. Data mining, employing techniques such as support vector machines (SVM), decision trees, neural networks, clustering, fuzzy logic, and genetic algorithms, plays a pivotal role in extracting information and uncovering hidden patterns within the data [3, 4].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…Farfán and Cea [27] built a hydrological ensemble model based on artificial neural networks, enhancing the model results in terms of linear correlation, bias, and variability. The models mentioned above allocate weights based on the individual performance of a single model throughout the entire training phase [28]. However, in the context of runoff forecasting, the data characteristics of model predictions at different time points may exhibit varying correlations with training data features [29].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
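
The weighting scheme this statement refers to (each ensemble member receives a fixed weight determined by its performance over the entire training phase) can be sketched as below. This is a minimal illustration assuming inverse-RMSE weighting; the function names, the error metric, and the toy data are assumptions for illustration, not taken from the cited papers.

```python
import numpy as np

def inverse_rmse_weights(train_preds, train_obs):
    """Assign each ensemble member a fixed weight from its training-phase RMSE.

    train_preds: array of shape (n_models, n_samples), each model's predictions
                 over the whole training period.
    train_obs:   array of shape (n_samples,), observed values.
    Returns weights summing to 1; lower-error models receive larger weights.
    """
    rmse = np.sqrt(np.mean((train_preds - train_obs) ** 2, axis=1))
    inv = 1.0 / np.maximum(rmse, 1e-12)  # guard against a zero-error model
    return inv / inv.sum()

def ensemble_forecast(test_preds, weights):
    """Combine member forecasts with the fixed, performance-based weights."""
    return weights @ test_preds  # shape (n_samples,)

# Toy usage with synthetic data (illustrative only)
rng = np.random.default_rng(0)
obs = rng.normal(size=100)
members = np.stack([obs + rng.normal(scale=s, size=100) for s in (0.2, 0.5, 1.0)])
w = inverse_rmse_weights(members, obs)
combined = ensemble_forecast(members, w)
```

Because the weights are computed once from the whole training phase and then held fixed, this scheme cannot adapt when the relationship between member forecasts and observations varies over time, which is the limitation the citing authors raise for runoff forecasting.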