2019
DOI: 10.1007/s00500-019-03968-7

RETRACTED ARTICLE: A taxonomy on impact of label noise and feature noise using machine learning techniques

Cited by 24 publications (19 citation statements: 0 supporting, 19 mentioning, 0 contrasting) · References 19 publications

Citation statements (ordered by relevance):
“…By operating in an agnostic manner to capture non-linear, multi-dimensional interactions and infer the degree of class ownership, these tools may be better suited to explaining the distinctions between structural classes than the hard, binary decision boundaries set by a priori assumptions in classical hypothesis-testing approaches (Rutledge et al. 2019; Li and Tong 2020). Decision-tree ML models are particularly favourable when seeking to explain variables of interest from non-normally distributed data, such as self-reported independent subjective experiences, and they derive good explanatory value even in the presence of major scoring noise (Shanthini et al. 2019). Together with their redeployable nature once trained, they may be useful tools for generalising measures of subjective effects.…”
Section: Introduction (mentioning)
Confidence: 99%
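As a rough illustration of the claim in the statement above (not code from any of the cited papers), the sketch below fits a shallow scikit-learn decision tree on skewed, self-report-style scores and refits it after corrupting a fraction of the training scores with heavy-tailed "scoring noise". The data generator, noise rate, and tree depth are all assumptions made for the example.

```python
# Minimal sketch: a shallow decision tree on non-normally distributed scores,
# with and without corruption of a fraction of the training values.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Non-normally distributed "subjective ratings": exponential scores for two classes.
n = 2000
scores_a = rng.exponential(scale=1.0, size=(n // 2, 3))
scores_b = rng.exponential(scale=2.0, size=(n // 2, 3))
X = np.vstack([scores_a, scores_b])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Corrupt 20% of the training scores with heavy-tailed noise ("scoring noise").
noisy = X_tr.copy()
mask = rng.random(noisy.shape) < 0.20
noisy[mask] += rng.standard_cauchy(mask.sum())

for name, X_fit in [("clean scores", X_tr), ("noisy scores", noisy)]:
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_fit, y_tr)
    print(f"{name}: test accuracy = {accuracy_score(y_te, tree.predict(X_te)):.3f}")
```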
“…The attack impact of the entropy approach becomes more visible as the noise level rises and, according to the results, it outperforms the other methods. In Shanthini et al.'s study [51], the robustness of three boosting learners was tested on three medical data sets under the effect of feature and label noise. The experimental findings reveal that label noise does significantly more harm than feature noise.…”
Section: Label-flipping Poisoning Attack (mentioning)
Confidence: 99%
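A minimal sketch of the kind of comparison described in the statement above, not the actual experimental setup of Shanthini et al. or of the attack study: it flips a fraction of training labels (class noise) and, separately, perturbs a matching fraction of feature values (feature noise), then tracks how a boosting learner's test accuracy degrades. The synthetic data, noise rates, and the choice of GradientBoostingClassifier are assumptions made for illustration.

```python
# Minimal sketch: label-flipping (class) noise vs. additive feature noise,
# evaluated with a boosting learner at matching noise rates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def flip_labels(labels, rate):
    """Label (class) noise: flip a random fraction of binary training labels."""
    flipped = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    flipped[idx] = 1 - flipped[idx]
    return flipped

def perturb_features(features, rate):
    """Feature noise: add Gaussian noise to a random fraction of feature values."""
    noisy = features.copy()
    mask = rng.random(noisy.shape) < rate
    noisy[mask] += rng.normal(scale=noisy.std(), size=mask.sum())
    return noisy

for rate in (0.0, 0.1, 0.2, 0.3):
    acc_label = accuracy_score(
        y_te,
        GradientBoostingClassifier(random_state=0).fit(X_tr, flip_labels(y_tr, rate)).predict(X_te),
    )
    acc_feat = accuracy_score(
        y_te,
        GradientBoostingClassifier(random_state=0).fit(perturb_features(X_tr, rate), y_tr).predict(X_te),
    )
    print(f"noise rate {rate:.1f}: label-noise acc {acc_label:.3f} | feature-noise acc {acc_feat:.3f}")
```

If the quoted finding holds for this toy setup, the label-noise accuracies should fall off noticeably faster than the feature-noise ones as the rate grows.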
“…[12] also raised questions about the quality of the data in the NASA repository; even so, a few powerful machine-learning-based defect prediction models are available, such as [19,43,44,45]. A defect dataset can contain two different types of noise, and both of them [46] affect the performance of machine learning algorithms: the first is class noise and the second is feature noise. However, we have only considered class noise in this article.…”
Section: Quality of Defect Dataset (mentioning)
Confidence: 99%
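To make the distinction drawn in the statement above concrete, the sketch below injects the two noise types into a toy defect-style table: class noise flips the binary "defective" label, while feature noise perturbs the metric columns and leaves the labels untouched. The metric names and noise rates are hypothetical and not taken from any NASA dataset or from the retracted article.

```python
# Minimal sketch: class noise vs. feature noise in a toy defect-prediction table.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 500
defects = pd.DataFrame({
    "loc": rng.integers(10, 1000, n),                # lines of code
    "cyclomatic_complexity": rng.integers(1, 30, n),
    "num_operators": rng.integers(5, 400, n),
    "defective": rng.integers(0, 2, n),              # binary defect label
})

def add_class_noise(df, rate=0.1):
    """Class noise: flip the 'defective' label for a random fraction of modules."""
    noisy = df.copy()
    idx = noisy.sample(frac=rate, random_state=7).index
    noisy.loc[idx, "defective"] = 1 - noisy.loc[idx, "defective"]
    return noisy

def add_feature_noise(df, rate=0.1):
    """Feature noise: perturb a random fraction of metric values, labels untouched."""
    noisy = df.copy()
    for col in ["loc", "cyclomatic_complexity", "num_operators"]:
        mask = rng.random(len(noisy)) < rate
        scaled = noisy.loc[mask, col] * rng.uniform(0.5, 1.5, mask.sum())
        noisy.loc[mask, col] = scaled.round().astype(int)
    return noisy

flipped = (add_class_noise(defects)["defective"] != defects["defective"]).sum()
print(f"class noise flipped {flipped} of {n} labels; feature noise leaves labels unchanged")
```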