2022
DOI: 10.1007/s10994-022-06247-z
Transfer and share: semi-supervised learning from long-tailed data

Abstract: Long-Tailed Semi-Supervised Learning (LTSSL) aims to learn from class-imbalanced data where only a few samples are annotated. Existing solutions typically require substantial cost to solve complex optimization problems, or class-balanced undersampling, which can result in information loss. In this paper, we present TRAS (TRAnsfer and Share) to effectively utilize long-tailed semi-supervised data. TRAS transforms the imbalanced pseudo-label distribution of a traditional SSL model via a delicate function to e…

Cited by 10 publications (7 citation statements)
References 18 publications
“…Chang et al [25] proposed a joint resampling strategy, RIO. Wei [26] introduced open-sampling, a method that leverages out-of-distribution data to rebalance class priors and encourage separable representations. Yu et al [27] revived the use of balanced undersampling, achieving higher accuracy for worst-performing categories.…”
Section: Long-Tailed Visual Recognition
confidence: 99%
“…Evaluation Measures: Following (Yang et al. 2021; Wang et al. 2022), we use the common metrics below for OOD detection and ID classification: (1) FPR is the false positive rate. Implementation Details: We compared our approach COCL with several existing OOD detection methods on long-tailed training sets, including the classical methods MSP (Hendrycks and Gimpel 2016), OE (Hendrycks, Mazeika, and Dietterich 2018), and EnergyOE (Liu et al. 2020), and the very recently published methods PASCL (Wang et al. 2022), OS (Wei et al. 2022a), Class Prior (Jiang et al. 2023), and BERL (Choi, Jeong, and Choi 2023). The OCL method in our results is a baseline that is trained based on Eq.…”
Section: Experiments, Experiment Settings
confidence: 99%
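The statement above uses FPR as an OOD-detection metric. As a rough illustration only (not code from the cited works), the commonly reported variant fixes a true-positive rate on in-distribution (ID) scores and measures how many OOD samples slip past that threshold; the function name and toy score distributions below are our own:

```python
import numpy as np

def fpr_at_tpr(id_scores, ood_scores, tpr=0.95):
    """FPR on OOD samples at the score threshold that retains a `tpr`
    fraction of ID samples. Higher scores are assumed to mean 'more ID'."""
    # Threshold chosen so that `tpr` of ID scores lie at or above it.
    thresh = np.quantile(id_scores, 1.0 - tpr)
    # Fraction of OOD samples wrongly scored at or above the threshold.
    return float(np.mean(ood_scores >= thresh))

# Toy example: well-separated score distributions yield a low FPR.
rng = np.random.default_rng(0)
id_s = rng.normal(2.0, 1.0, 1000)    # hypothetical ID scores
ood_s = rng.normal(-2.0, 1.0, 1000)  # hypothetical OOD scores
print(fpr_at_tpr(id_s, ood_s))
```

With heavily overlapping distributions the same function approaches 1.0, which is why FPR@95 is paired with AUROC in most OOD evaluations.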
“…Compared to OOD detection on balanced ID datasets, significantly less work has been done in LTR scenarios. Recent studies (Wang et al. 2022; Wei et al. 2022a; Jiang et al. 2023; Choi, Jeong, and Choi 2023) are among the seminal works exploring OOD detection in LTR. Current methods in this line focus on distinguishing OOD samples from ID samples using an approach called outlier exposure (OE) (Hendrycks, Mazeika, and Dietterich 2018) that fits auxiliary/pseudo OOD data to a prior distribution (e.g., a uniform distribution) of ID data.…”
Section: Introduction
confidence: 99%
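The outlier-exposure idea described above, fitting auxiliary OOD data to a uniform prior over ID classes, reduces to a cross-entropy penalty against uniform targets. A minimal numpy sketch of that penalty follows; it is our own illustration under that assumption, not the reference implementation of OE or of any cited method:

```python
import numpy as np

def outlier_exposure_loss(logits_ood):
    """Cross-entropy between softmax predictions on auxiliary outliers
    and the uniform distribution over ID classes (OE-style penalty)."""
    # Numerically stable log-softmax.
    z = logits_ood - logits_ood.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Uniform targets: average negative log-probability over all classes.
    return float(-log_probs.mean())

# Perfectly uniform predictions attain the minimum, log(K).
logits = np.zeros((4, 10))
print(round(outlier_exposure_loss(logits), 4))  # → 2.3026, i.e. log(10)
```

Confident (peaked) predictions on outliers drive this loss up, so minimizing it alongside the usual ID classification loss pushes the model toward uncertainty on OOD inputs.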
“…Mahalanobis distance-based score [120], energy-based score [31, 121, 122], ReAct [123], GradNorm score [117], and non-parametric KNN-based score [124, 125]. 2) Some works address the out-of-distribution detection problem by training-time regularization [3, 31, 126–135]. For example, models are encouraged to give predictions with a uniform distribution [3, 126] or higher energies [31, 135–138] for outliers.…”
Section: Natural Robustness of Machine Learning
confidence: 99%
“…2) Some works address the out-of-distribution detection problem by training-time regularization [3, 31, 126–135]. For example, models are encouraged to give predictions with a uniform distribution [3, 126] or higher energies [31, 135–138] for outliers.…”
Section: Related Work
confidence: 99%