2022
DOI: 10.1007/978-3-031-19806-9_21

Difficulty-Aware Simulator for Open Set Recognition

Cited by 29 publications (14 citation statements)
References 34 publications
“…Apart from the above baselines, other OSR methods are not considered for a fair comparison, since they require additional parameters to be optimized in the encoder-decoder structure [34,43,9,50] or the generator-discriminator structure [19,33,32].…”
Section: Baseline Methods (mentioning, confidence: 99%)
“…Numerous researchers believe that modeling only known classes is insufficient and suggest that incorporating prior knowledge about unknown classes is necessary. Some approaches attempt to generate fake data [30], counterfactual images [31], or confused samples [17], [18]. Others [32], [33] introduce background classes or known unknown classes (KUC) to represent the unknown.…”
Section: B. Open Set Recognition (mentioning, confidence: 99%)
“…We compared our method against other state-of-the-art open set recognition approaches, including Softmax, OpenMax [25], ARPL [17], AKPF [18], Objectosphere Loss [32], and DIAS [30]. Among these, OpenMax calibrates the prediction probability using a Weibull distribution.…”
Section: Comparison With the State-of-the-Art (mentioning, confidence: 99%)
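The Weibull calibration mentioned for OpenMax can be illustrated with a short sketch. This is not the official OpenMax implementation (which uses the libMR library); it is a simplified, hypothetical version using `scipy.stats.weibull_min`, with toy centroids and assumed names (`centroids`, `tail_size`) standing in for per-class mean activation vectors and the tail-fitting hyperparameter.

```python
import numpy as np
from scipy.stats import weibull_min

# Toy per-class activation centroids and synthetic training activations.
# All shapes and names here are illustrative assumptions.
rng = np.random.default_rng(0)
num_classes, dim = 3, 5
centroids = rng.normal(size=(num_classes, dim))

# Step 1: per class, fit a Weibull to the largest distances between
# (simulated) correctly classified activations and the class centroid.
tail_size = 20
weibulls = []
for c in range(num_classes):
    acts = centroids[c] + 0.1 * rng.normal(size=(100, dim))
    dists = np.linalg.norm(acts - centroids[c], axis=1)
    tail = np.sort(dists)[-tail_size:]
    weibulls.append(weibull_min.fit(tail, floc=0))  # (shape, loc, scale)

# Step 2: recalibrate a query's logits. The Weibull CDF of the query's
# distance to each centroid estimates how "outlying" it is for that class;
# the mass removed from known classes is pooled into an unknown score.
logits = rng.normal(size=num_classes)
query = rng.normal(size=dim)
w = np.array([weibull_min.cdf(np.linalg.norm(query - centroids[c]), *weibulls[c])
              for c in range(num_classes)])
revised = logits * (1.0 - w)
unknown_score = np.sum(logits * w)

# Softmax over the K known scores plus the extra unknown slot.
scores = np.append(revised, unknown_score)
probs = np.exp(scores - scores.max()) / np.exp(scores - scores.max()).sum()
```

The key design point is that rejection of unknowns falls out of the same softmax: a sample far from every class centroid gets high Weibull CDF values, so most of its logit mass migrates to the extra unknown slot.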
“…To achieve this, we leverage the different layers in a network to generate samples of different difficulty levels, so that they can scatter both inside and outside of the known-classes distribution. Inspired by DIAS [18], we introduce an architecture called Synthetic Unknown Attack Sample Generator (SUASG) to produce synthetic unknown samples for FAS during training (upper-left of Fig. 1).…”
Section: Domain Generalized Unknown Attacks (mentioning, confidence: 99%)
“…On the other hand, as the simulated data are synthesized from the original training data, the original labels could serve as the pseudo labels for the synthesized data. We thus smooth them to form the target probabilities of the synthetic data as suggested by [18]:…”
Section: Dataset, Number of Videos/Images (mentioning, confidence: 99%)
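The pseudo-label smoothing described in that statement amounts to standard label smoothing applied to the labels inherited from the source samples. A minimal sketch, assuming a smoothing factor `eps = 0.1` (the actual value used by [18] is not given here):

```python
import numpy as np

def smooth_labels(onehot: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """Blend one-hot pseudo labels toward the uniform distribution.

    eps is an assumed smoothing factor; k is the number of classes.
    """
    k = onehot.shape[-1]
    return onehot * (1.0 - eps) + eps / k

# Synthetic samples inherit the labels of the originals they were built from,
# then those labels are softened to form target probabilities.
labels = np.eye(3)[[0, 2]]
targets = smooth_labels(labels)
# Each row still sums to 1; the true class keeps 1 - eps + eps/k of the mass.
```

Softening the targets reflects the uncertainty of the pseudo labels: a synthetic sample is derived from, but not identical to, its source, so assigning it the full one-hot label would be overconfident.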