2022
DOI: 10.48550/arxiv.2205.15239
Preprint
Conformal Credal Self-Supervised Learning

Abstract: In semi-supervised learning, the paradigm of self-training refers to the idea of learning from pseudo-labels suggested by the learner itself. Across various domains, corresponding methods have proven effective and achieve state-of-the-art performance. However, pseudo-labels typically stem from ad-hoc heuristics, relying on the quality of the predictions without guaranteeing their validity. One such method, so-called credal self-supervised learning, maintains pseudo-supervision in the form of sets of (in…
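The abstract contrasts ad-hoc pseudo-labels with set-valued, conformal pseudo-supervision. As a rough illustration of the kind of set-valued label a conformal procedure produces (a minimal split-conformal sketch, not the paper's actual algorithm; the function name and the nonconformity score are assumptions):

```python
import numpy as np

def conformal_pseudo_label_set(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Set-valued pseudo-label via split conformal prediction.

    cal_probs:  (n, K) predicted class probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (K,)   predicted probabilities for one unlabeled instance
    alpha:      miscoverage level; marginally, the returned set contains the
                true label with probability at least 1 - alpha
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    nonconformity = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(nonconformity, min(q_level, 1.0), method="higher")
    # Keep every class whose score does not exceed the threshold.
    return [k for k in range(len(test_probs)) if 1.0 - test_probs[k] <= q_hat]
```

A confident prediction yields a small (possibly singleton) set, while an ambiguous one yields a larger set; such sets can then serve as imprecise pseudo-supervision instead of a single hard label.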

Cited by 1 publication (1 citation statement)
References 29 publications
“…They have attracted interest in both the theoretical and application-oriented literature due to their flexibility as well as for the rich connections with convex analysis [12], optimization [2,35] and statistics [37]. Also in the context of machine learning (ML), credal sets and related models have recently attracted interest as a way to model weak supervision information in a variety of learning settings, including self-supervised learning [22,21], learning from noisy data [23,24], and learning from imprecise data [11,15,22], a general family of settings that encompasses, among others, semi-supervised learning, superset learning [17,26] and fuzzy label learning [10,32]. In all of these settings the idea is to model the weak supervision by means of credal sets, that are assumed to represent the partial or noisy information available to the annotating agent that produced the data: this general framework for studying weakly supervised learning is called credal learning.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
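The citation statement above describes modeling weak supervision by credal sets, i.e. convex sets of probability distributions standing in for a partially known label. A minimal sketch of the simplest such case, a superset (set-valued) label inducing the credal set of all distributions supported on the candidate classes (function names are illustrative, not from the cited works):

```python
import numpy as np

def credal_set_vertices(candidate_labels, num_classes):
    """Vertices of the credal set induced by a superset label.

    A coarse observation such as {0, 2} ("the label is class 0 or class 2")
    induces the credal set of all distributions putting mass only on those
    classes: the convex hull of the corresponding one-hot distributions.
    """
    return [np.eye(num_classes)[k] for k in sorted(candidate_labels)]

def lower_upper_probability(vertices, event):
    """Lower and upper probability of an event (a set of classes).

    Since the functional is linear, the extremes over the credal set are
    attained at its vertices.
    """
    probs = [sum(v[k] for k in event) for v in vertices]
    return min(probs), max(probs)
```

For the superset label {0, 2} in a 3-class problem, the event {0} has lower probability 0 and upper probability 1 (total ignorance between the candidates), while the event {0, 2} has lower probability 1, reflecting that the true label is certainly among the candidates.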