2022
DOI: 10.3390/fi14040110

Decorrelation-Based Deep Learning for Bias Mitigation

Abstract: Although deep learning has proven tremendously successful, its performance remains heavily dependent on the quality and quantity of the training data. Because data quality can be degraded by biases, this study presents a novel deep learning method based on decorrelation. The decorrelation learns bias-invariant features by reducing the non-linear statistical dependency between the features and the bias itself. This makes deep learning models less prone to biased decisions by…
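A minimal sketch of how such a decorrelation penalty could be implemented, using distance correlation as one possible measure of non-linear statistical dependency between learned features and a bias variable. The function names, tensor shapes, and the weighting factor `lam` are assumptions for illustration, not the paper's published implementation:

```python
# Hedged sketch (not the authors' code): a training loss that penalizes
# non-linear statistical dependence between learned features F and a bias
# variable B, measured here with sample distance correlation.
import torch


def distance_correlation(f: torch.Tensor, b: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Distance correlation between features f (n, d_f) and bias b (n, d_b).

    A 1-D bias vector should be reshaped to (n, 1) first. Returns a value in
    [0, 1]; values near 0 indicate approximate statistical independence.
    """
    n = f.shape[0]
    a, c = torch.cdist(f, f, p=2), torch.cdist(b, b, p=2)
    # Double-center each pairwise-distance matrix.
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    C = c - c.mean(0, keepdim=True) - c.mean(1, keepdim=True) + c.mean()
    dcov2_fb = (A * C).sum() / (n * n)
    dcov2_ff = (A * A).sum() / (n * n)
    dcov2_bb = (C * C).sum() / (n * n)
    return torch.sqrt(dcov2_fb + eps) / torch.sqrt(torch.sqrt(dcov2_ff * dcov2_bb) + eps)


def training_loss(task_loss: torch.Tensor,
                  features: torch.Tensor,
                  bias: torch.Tensor,
                  lam: float = 1.0) -> torch.Tensor:
    """Task loss plus a decorrelation penalty pushing features toward bias invariance."""
    return task_loss + lam * distance_correlation(features, bias)
```

In training, the bias variable B could be, for example, a one-hot encoding of the biased attribute; driving the penalty toward zero encourages the learned features to become statistically independent of B while the task loss preserves predictive performance.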

Cited by 5 publications (4 citation statements)
References 16 publications
“…The choice of feature F depends on the type of bias mitigation technique and model architecture. As stated in our previous work [32], the bias variable B should provide more precise bias-relevant information.…”
Section: Discussion
confidence: 77%
“…Scanner dependencies on model performance are mitigated by decorrelating scanner configuration information from learned features to create scanner-invariant features. The proposed method is simple yet more effective and can be applied to the mitigation of a wide range of data bias, confounders, class bias, or a combination of all bias issues, as shown in our previous work [32]. The proposed DcCNN framework in this study, on the other hand, is specifically designed to address scanner dependency and imbalance issues that are common in large clinical trials involving neuroimaging data.…”
Section: Methods
confidence: 99%
“…The choice of feature F depends on the type of bias mitigation technique and model architecture. As stated in our previous work [35], the bias variable B should provide more precise bias-relevant information.…”
Section: Discussion
confidence: 77%
“…Scanner dependencies in model performance are mitigated by decorrelating scanner configuration information from learned features to create scanner-invariant features. The proposed method is simple yet more effective and can be applied to the mitigation of a wide range of data-bias, confounder, class-bias, or a combination of all bias issues, as shown in our previous work [35]. The proposed DcCNN framework in this study, on the other hand, is specifically designed to address scanner-dependency and imbalance issues that are common in large clinical trials involving neuroimaging data.…”
Section: Methods
confidence: 97%
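The Methods statements quoted above describe decorrelating scanner configuration information from learned features to obtain scanner-invariant features. A hedged sketch of what such a training step might look like, assuming a toy encoder, a one-hot scanner encoding, a distance-correlation penalty, and a penalty weight `lam`; the model details are illustrative stand-ins, not the published DcCNN:

```python
# Hedged sketch of scanner-invariant training in the spirit of the statements above.
import torch
import torch.nn as nn


def dcor(f, b, eps=1e-9):
    """Distance correlation between features f (n, d_f) and scanner variables b (n, d_b)."""
    a, c = torch.cdist(f, f), torch.cdist(b, b)
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    C = c - c.mean(0, keepdim=True) - c.mean(1, keepdim=True) + c.mean()
    xy, xx, yy = (A * C).mean(), (A * A).mean(), (C * C).mean()
    return torch.sqrt(xy + eps) / torch.sqrt(torch.sqrt(xx * yy) + eps)


encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())  # toy stand-in for a CNN
classifier = nn.Linear(64, 2)                                             # task head
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
lam = 1.0                                                                  # assumed penalty weight


def train_step(images, labels, scanner_config):
    """images: (n, 1, 32, 32); labels: (n,); scanner_config: (n, d_b), e.g. one-hot scanner type."""
    features = encoder(images)
    task_loss = nn.functional.cross_entropy(classifier(features), labels)
    penalty = dcor(features, scanner_config)   # push features toward scanner invariance
    loss = task_loss + lam * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item(), penalty.item()


# Example call with random data (3 hypothetical scanner types):
x = torch.randn(16, 1, 32, 32)
y = torch.randint(0, 2, (16,))
s = nn.functional.one_hot(torch.randint(0, 3, (16,)), 3).float()
print(train_step(x, y, s))
```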