2021
DOI: 10.1002/int.22629
Domain adaptation via incremental confidence samples into classification

Abstract: To accurately recognize similar objects in different domains, the key for domain adaptation is to learn new metrics so as to minimize the discrepancy of two domains. Recent works utilize joint probability domain adaptation to tackle this problem but get poor performance due to poor discriminability or transferability of data sets. The inaccurate pseudo‐labeling in the feature subspace can lead to a chain reaction of errors during iterations, and varieties of the joint probability distribution values further ag…

Cited by 17 publications (3 citation statements) · References 35 publications
“…(10) PDALC [50]: PDALC integrates class discriminant information and joint distribution alignment to learn a shared subspace, while the label correction mechanism facilitates the alignment of conditional distribution. (11) ICSC [51]: ICSC learns a projection matrix to minimize probability distribution differences, reduce intra-class distances, and increase inter-class distances. Finally, incremental confidence samples are employed to reduce classification errors in each iteration.…”
Section: Comparative Methods
confidence: 99%
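The projection step attributed to ICSC above (learn a projection that shrinks the probability-distribution gap between domains while keeping the data informative) can be illustrated with a minimal, numpy-only, TCA/JDA-style sketch. This is a reconstruction of the general technique, not the paper's code: it handles only the marginal-distribution term with a linear kernel, and `mmd_projection`, `k`, and `lam` are assumed, illustrative names.

```python
import numpy as np

def mmd_projection(Xs, Xt, k=2, lam=1e-3):
    """Linear projection that minimizes the empirical (linear-kernel) MMD
    between source Xs and target Xt while preserving total scatter.
    TCA-style sketch; the full ICSC objective also shapes class distances."""
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt]).T                 # d x n, columns are samples
    n, d = ns + nt, X.shape[0]
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    M = np.outer(e, e)                        # marginal MMD coefficient matrix
    H = np.eye(n) - np.full((n, n), 1.0 / n)  # centering matrix
    A = X @ M @ X.T + lam * np.eye(d)         # domain discrepancy (regularized)
    B = X @ H @ X.T + lam * np.eye(d)         # total scatter (regularized)
    # maximize scatter-to-discrepancy ratio: eigenvectors of A^{-1} B
    vals, vecs = np.linalg.eig(np.linalg.solve(A, B))
    order = np.argsort(-vals.real)
    W = vecs[:, order[:k]].real               # top-k projection directions
    return W                                  # new features: Xs @ W, Xt @ W
```

For example, if the two domains differ only by a shift along one axis, the top direction ignores that axis and the projected domain means coincide.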
“…This issue significantly affects the feature adaptation process. Previous studies (Wang and Breckon, 2020; Teng et al., 2022) have shown that gradually predicting pseudo-labels from high-quality to low-quality samples yields better results than directly predicting pseudo-labels for all target-domain samples during feature adaptation. Therefore, to ensure a well-performing iterative feature adaptation process, we employ a selective pseudo-labeling strategy.…”
Section: The Proposed Methods
confidence: 99%
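The easy-to-hard pseudo-labeling idea quoted above can be sketched with a simple nearest-centroid selector: each iteration trusts only the most confident fraction of target samples, and that fraction grows until every sample is labeled. A minimal sketch under those assumptions; `selective_pseudo_labels` and `n_iters` are illustrative names, not from the cited papers.

```python
import numpy as np

def selective_pseudo_labels(Xs, ys, Xt, n_iters=5):
    """Easy-to-hard pseudo-labeling: in iteration t, keep only the
    top t/n_iters fraction of target samples by confidence."""
    classes = np.unique(ys)
    labeled_idx = np.array([], dtype=int)
    labels = np.full(len(Xt), -1)
    for t in range(1, n_iters + 1):
        # class centroids from source plus already-trusted target samples
        X = np.vstack([Xs] + ([Xt[labeled_idx]] if len(labeled_idx) else []))
        y = np.concatenate([ys] + ([labels[labeled_idx]] if len(labeled_idx) else []))
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(Xt[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        conf = -d.min(axis=1)                # nearer centroid => higher confidence
        keep = int(len(Xt) * t / n_iters)    # labeled fraction grows each iteration
        labeled_idx = np.argsort(-conf)[:keep]
        labels[labeled_idx] = pred[labeled_idx]
    return labels
```

The design choice mirrors the quoted finding: early iterations fit centroids only on source data plus the clearest target samples, so low-quality pseudo-labels cannot trigger the chain reaction of errors the abstract describes.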
“…If the balance parameters are not selected properly, errors can accumulate during the iteration process, ultimately affecting the model's ability to identify faults. Therefore, to avoid the negative impact of a fixed balance parameter u on the fault diagnosis model, we adopt an adaptive adjustment method that uses the covariance matrix as a class distance measure [39] to solve such problems.…”
Section: Construct the Objective Function for SFDDA
confidence: 99%
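One plausible form of the adaptive balance parameter mentioned above is to weight marginal versus conditional alignment by their relative distribution gaps, measured with a covariance-based (Mahalanobis-style) class distance. This is a hedged sketch of that general idea, not the method of reference [39]; `adaptive_mu` and its arguments are assumed names.

```python
import numpy as np

def adaptive_mu(Xs, ys, Xt, yt_pseudo):
    """Adaptive balance parameter: ratio of the marginal-domain gap to the
    total (marginal + mean per-class) gap, under a shared covariance metric."""
    # shared covariance of both domains as the metric (regularized)
    S = np.cov(np.vstack([Xs, Xt]).T) + 1e-6 * np.eye(Xs.shape[1])
    Sinv = np.linalg.inv(S)

    def mdist(a, b):
        d = a - b
        return float(np.sqrt(d @ Sinv @ d))  # Mahalanobis-style distance

    dm = mdist(Xs.mean(axis=0), Xt.mean(axis=0))          # marginal gap
    dcs = [mdist(Xs[ys == c].mean(axis=0), Xt[yt_pseudo == c].mean(axis=0))
           for c in np.unique(ys) if (yt_pseudo == c).any()]
    dc = float(np.mean(dcs)) if dcs else dm               # conditional gap
    mu = dm / (dm + dc + 1e-12)  # larger marginal gap => weight marginal term more
    return mu
```

When both domains are shifted uniformly, marginal and per-class gaps coincide and the parameter settles at 0.5, removing the need to hand-tune a fixed value.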