2022
DOI: 10.48550/arxiv.2210.01189
Preprint
Supervised Contrastive Regression

Abstract: Deep regression models typically learn in an end-to-end fashion and do not explicitly try to learn a regression-aware representation. Their representations tend to be fragmented and fail to capture the continuous nature of regression tasks. In this paper, we propose Supervised Contrastive Regression (SupCR), a framework that learns a regression-aware representation by contrasting samples against each other based on their target distance. SupCR is orthogonal to existing regression models, and can be used in com…
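The abstract's core idea — contrasting samples based on their target distance, so that samples with closer labels are pulled closer in feature space — can be sketched as a loss function. The sketch below is a minimal illustration under assumed choices (cosine similarity, a temperature hyperparameter, absolute label difference as target distance); it is not the paper's exact formulation.

```python
import numpy as np

def supcr_style_loss(features, labels, temperature=2.0):
    """Sketch of a SupCR-style contrastive regression loss.

    For each anchor i and candidate positive j, the negatives are all
    samples k whose label distance to i is at least as large as j's.
    Pulling j closer than those farther-in-label samples encourages a
    representation ordered by target distance. Details (cosine
    similarity, temperature value) are illustrative assumptions.
    """
    n = len(labels)
    # L2-normalize features and compute temperature-scaled cosine similarity
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature
    # Pairwise label distances (absolute difference, an assumption)
    dist = np.abs(labels[:, None] - labels[None, :])
    loss, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            # Negatives: samples at least as far from i in label space as j
            mask = dist[i] >= dist[i, j]
            mask[i] = False
            denom = np.sum(np.exp(sim[i][mask]))
            # InfoNCE-style term: -log( exp(sim_ij) / sum_k exp(sim_ik) )
            loss += np.log(denom) - sim[i, j]
            count += 1
    return loss / count
```

As a sanity check, a batch whose feature ordering matches the label ordering should incur a lower loss than one where a label-near pair is feature-far.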

Cited by 2 publications (2 citation statements)
References 18 publications
“…Relevant research [12] has shown that end-to-end deep learning (DL) methods often neglect the optimization of the latent space, resulting in fragmented feature distributions. This issue becomes more pronounced in regression tasks and when dealing with activity cliffs (ACs).…”
Section: Activity Cliff Awareness Facilitates Metric Learning in Late...
Confidence: 99%
“…AC refers to cases where structurally similar compounds have significant differences in activity, indicating the complexity and discontinuity of the molecular activity space. Current end-to-end DL methods neglect latent feature space optimization, resulting in fragmented feature distribution [12]. Samples with significant target differences may be close in feature space, leading to suboptimal predictions.…”
Confidence: 99%