2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA)
DOI: 10.1109/iccca49541.2020.9250825
Pattern-based Comparative Analysis of Techniques for Missing Value Imputation

Cited by 3 publications (3 citation statements)
References 5 publications
“…Datawig makes predictions for missing Likert-type items by using gradient-boosted trees, a type of ensemble learning method that builds multiple decision trees and combines their predictions to make a final prediction (Sadhu et al., 2020). When predicting missing Likert-type items, the gradient-boosted trees learn the relationships between the observed values and the target variable, taking into account the ordinal information.…”
Section: Empirical Example
Citation type: mentioning, confidence: 99%
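The approach described in this excerpt can be sketched with scikit-learn rather than Datawig itself: a gradient-boosted tree classifier is trained on respondents whose target Likert item is observed, then predicts the missing responses from the remaining items. The column names, toy data, and use of GradientBoostingClassifier below are illustrative assumptions, not Datawig's actual interface.

```python
# Minimal sketch (not Datawig's API): imputing a missing Likert-type item
# with a gradient-boosted tree classifier, using the other items as predictors.
# Column names and the example responses are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical survey responses on a 1-5 Likert scale; NaN marks a missing item.
df = pd.DataFrame({
    "item_a": [4, 5, 3, 2, 4, 5, 1, 3, 4, 2],
    "item_b": [5, 5, 3, 1, 4, 4, 2, 3, 5, 2],
    "item_c": [4, 5, 2, 2, np.nan, 5, 1, 3, 4, 2],  # target item with a gap
})

observed = df[df["item_c"].notna()]
missing = df[df["item_c"].isna()]

# Fit gradient-boosted trees on rows where the target item is observed.
model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(observed[["item_a", "item_b"]], observed["item_c"].astype(int))

# Predict the missing responses and write them back into the frame.
df.loc[missing.index, "item_c"] = model.predict(missing[["item_a", "item_b"]])
print(df)
```

Because the Likert responses are treated as class labels here, the predictions are guaranteed to fall on the observed 1-5 scale; a regression variant would need rounding or clipping to respect the ordinal range.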
“…The k-nearest neighbor (kNN) algorithm is an instance-based estimation procedure using the ‘k’-nearest neighbor values of the feature. The value that is imputed is generally the mean of the k-nearest neighbor values, and they can also be modified accordingly [23]. The kNN imputation method has been successfully applied in the real-time processing of data due to its simplicity and high accuracy [18, 24].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
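A minimal sketch of the kNN imputation described above, using scikit-learn's KNNImputer: each missing entry is replaced by the mean of that feature over the k nearest neighbouring rows. The toy matrix and the choice of k = 2 are illustrative assumptions.

```python
# kNN imputation sketch: a NaN is filled with the mean of the corresponding
# feature from the k rows closest in the observed dimensions.
import numpy as np
from sklearn.impute import KNNImputer

# Small illustrative matrix with two missing entries.
X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, 4.0, 3.0],
    [np.nan, 6.0, 5.0],
    [8.0, 8.0, 7.0],
])

# n_neighbors=2: distances are computed on the observed coordinates
# (nan-aware Euclidean distance), and each gap becomes the average of
# its 2 nearest neighbours' values for that column.
imputer = KNNImputer(n_neighbors=2)
X_imputed = imputer.fit_transform(X)
print(X_imputed)
```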
“…Several studies have published the missing data problem in the ML domain [7], [11]-[14]. However, these studies have been dispersed among different journals and conference proceedings.…”
Section: Introduction
Citation type: mentioning, confidence: 99%