2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00124
I Bet You Are Wrong: Gambling Adversarial Networks for Structured Semantic Segmentation

Abstract: Adversarial training has been recently employed for realizing structured semantic segmentation, in which the aim is to preserve higher-level scene structural consistencies in dense predictions. However, as we show, value-based discrimination between the predictions from the segmentation network and ground-truth annotations can hinder the training process from learning to improve structural qualities as well as disabling the network from properly expressing uncertainties. In this paper, we rethink adversarial tr…

Cited by 14 publications (13 citation statements)
References 46 publications
“…The histogram shows that the confidence (inverse of uncertainty) from our hyperbolic approach clearly correlates with the distance to the nearest boundary. This result highlights that hyperbolic uncertainty provides a direct clue about which regions in the image contain boundaries between classes, which can, in turn, be used to determine whether to ignore such regions or to pinpoint where to optimize further as boundary areas commonly contain many errors [37]. We provide the same experiment for 256 embedding dimensions in the supplementary materials, which follows the same distribution.…”
Section: Uncertainty and Boundary Information For Free
confidence: 93%
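The distance-to-boundary statistic this snippet describes can be reproduced with a Euclidean distance transform over a label map. A minimal NumPy/SciPy sketch (the function name and the toy two-class map are illustrative assumptions, not taken from the cited work):

```python
import numpy as np
from scipy import ndimage

def boundary_distance_map(labels: np.ndarray) -> np.ndarray:
    """Distance from each pixel to the nearest class boundary."""
    # A pixel lies on a boundary if any 4-neighbour has a different label.
    boundary = np.zeros_like(labels, dtype=bool)
    vdiff = labels[:-1, :] != labels[1:, :]
    hdiff = labels[:, :-1] != labels[:, 1:]
    boundary[:-1, :] |= vdiff
    boundary[1:, :] |= vdiff
    boundary[:, :-1] |= hdiff
    boundary[:, 1:] |= hdiff
    # Euclidean distance transform: 0 on boundary pixels, growing inwards.
    return ndimage.distance_transform_edt(~boundary)

# Toy 2-class map: left half class 0, right half class 1.
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1
dist = boundary_distance_map(labels)
```

Binning per-pixel confidence values by `dist` would then reproduce, in spirit, the correlation histogram the snippet refers to.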
“…In practice, the generator loss is often complemented with the pixelwise loss from Eq. (2) to improve training stability and prediction quality [33][34][35]. Even though the mixed supervision of adversarial and cross entropy losses leads to improved empirical results, we argue that the two objective functions are not well aligned in the presence of noisy data.…”
Section: Preliminaries
confidence: 93%
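The mixed supervision this snippet describes, an adversarial term added to the per-pixel cross entropy, can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions: the function names and the weighting hyper-parameter `lam` are not taken from the cited papers.

```python
import numpy as np

def softmax(x, axis=1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mixed_generator_loss(logits, target, disc_logit, lam=0.01):
    """Pixel-wise cross entropy plus a non-saturating adversarial term.

    logits:     (N, C, H, W) raw segmentation scores
    target:     (N, H, W) integer ground-truth labels
    disc_logit: (N,) discriminator logits on the predicted maps
    lam:        adversarial weight (assumed hyper-parameter)
    """
    n, c, h, w = logits.shape
    probs = softmax(logits, axis=1)
    ni, hi, wi = np.ogrid[:n, :h, :w]
    # Mean negative log-likelihood of the true class at every pixel.
    ce = -np.log(probs[ni, target, hi, wi]).mean()
    # The generator wants the discriminator to score its output as "real":
    # softplus(-x) == -log(sigmoid(x)).
    adv = np.log1p(np.exp(-disc_logit)).mean()
    return ce + lam * adv
```

With uniform logits over C classes the cross-entropy term reduces to log C, which makes the mixing of the two objectives easy to inspect numerically.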
“…Inspired by early approaches from model calibration literature [49,66,67,43,44], a number of methods propose endowing the task network with an error prediction branch allowing self-assessment of predictive performance. This branch can be trained jointly with the main network [13,64]; however, better learning stability and results are achieved with two-stage sequential training [10,22,4,52]. Our ObsNet also uses an auxiliary network and is trained in two stages allowing it to learn from the failure modes of the task network. While [10,22,4,52] focus on in-distribution errors, we address OOD detection for which there is no available training data.…”
Section: Anomaly Detection By Reconstruction
confidence: 99%
“…In contrast with these methods that struggle with the lack of sufficient negative data to learn from, we devise an effective strategy to generate failures that further enable generalization to OOD detection.…”
Section: Anomaly Detection By Reconstruction
confidence: 99%