2020 · Preprint
DOI: 10.48550/arxiv.2009.02967

Stochastic-YOLO: Efficient Probabilistic Object Detection under Dataset Shifts

Tiago Azevedo, René de Jong, Matthew Mattina, et al.

Abstract: In image classification tasks, the evaluation of models' robustness to increased dataset shifts with a probabilistic framework is very well studied. However, Object Detection (OD) tasks pose other challenges for uncertainty estimation and evaluation. For example, one needs to evaluate both the quality of the label uncertainty (i.e., what?) and spatial uncertainty (i.e., where?) for a given bounding box, but that evaluation cannot be performed with more traditional average precision metrics (e.g., mAP). In this…
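The core recipe behind probabilistic detectors of this kind is Monte Carlo sampling at inference time: dropout layers stay stochastic while the rest of the network runs deterministically, and the spread across sampled detections quantifies both the label ("what?") and spatial ("where?") uncertainty. Below is a minimal PyTorch sketch of that sampling loop; the detector, its dropout placement, and the `mc_detect` helper name are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def enable_mc_dropout(model: torch.nn.Module) -> None:
    # Eval mode freezes batch norm statistics etc., but dropout modules
    # are switched back to train mode so masks are resampled each pass.
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()

@torch.no_grad()
def mc_detect(model: torch.nn.Module, image: torch.Tensor, num_samples: int = 10):
    # One stochastic forward pass per sample; the variance of the
    # resulting boxes/scores is a proxy for spatial/label uncertainty.
    enable_mc_dropout(model)
    return [model(image) for _ in range(num_samples)]
```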

Cited by 7 publications (16 citation statements) · References 14 publications
“…We consider different ways to apply DropBlock in YOLO models and show how it can improve the uncertainty modeling, generalization performance, and calibration capabilities of the YOLO models. We compare the proposed MC DropBlock with various baselines, including ones that apply DropBlock only at training time (training-time DropBlock) or only at inference/test time (inference-time DropBlock), and with existing approaches such as dropout-based [37] and Gaussian YOLO [38] models. All the experiments are trained on NVIDIA V100 GPUs.…”
Section: Methods
confidence: 99%
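For context, DropBlock differs from ordinary dropout by zeroing contiguous spatial regions rather than independent activations; in MC DropBlock it remains active at test time, so each stochastic forward pass sees a fresh block mask. A minimal functional sketch follows (simplified gamma, no boundary correction, hypothetical helper name):

```python
import torch
import torch.nn.functional as F

def drop_block2d(x: torch.Tensor, p: float = 0.1, block_size: int = 5) -> torch.Tensor:
    # Convert the target drop rate p into a per-position seed probability
    # (boundary effects ignored for brevity).
    gamma = p / (block_size ** 2)
    # Sample block centres per spatial location, shared across channels.
    seeds = (torch.rand(x.shape[0], 1, *x.shape[2:], device=x.device) < gamma).float()
    # Dilate each centre into a block_size x block_size square.
    block_mask = F.max_pool2d(seeds, kernel_size=block_size, stride=1,
                              padding=block_size // 2)
    keep = 1.0 - block_mask
    # Rescale so the expected activation magnitude is preserved.
    return x * keep * keep.numel() / keep.sum().clamp(min=1.0)
```

The training-time-only and inference-time-only baselines from the quote correspond to calling such a function in only one phase; MC DropBlock calls it in both.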
“…For SMCDO we use Wide-ResNet-20-k with a widening factor k = 3 and M = 3 samples. Dropout is added to approximately the second half of the model (convolution layer 13 and deeper), since previous experiments have shown that this configuration yields a good performance-to-latency trade-off [14]. Dropout layers are always positioned before a convolution layer.…”
Section: A. Experimental Setup for CIFAR-10
confidence: 99%
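The quoted setup (dropout only in the second half of the network, each dropout placed before a convolution) can be reproduced with a small module-rewriting pass. The sketch below assumes a PyTorch model and counts Conv2d layers in definition order; the `first_layer=13` default mirrors the quote, and the helper itself is a hypothetical illustration:

```python
import torch.nn as nn

def add_partial_mc_dropout(model: nn.Module, first_layer: int = 13, p: float = 0.1) -> None:
    # Collect convolutions in definition order; indices are 1-based to
    # match "convolution layer 13 and deeper" from the quoted setup.
    convs = [(name, m) for name, m in model.named_modules() if isinstance(m, nn.Conv2d)]
    for idx, (name, conv) in enumerate(convs, start=1):
        if idx < first_layer:
            continue
        parent = model.get_submodule(name.rsplit(".", 1)[0]) if "." in name else model
        attr = name.rsplit(".", 1)[-1]
        # Dropout goes in front of the conv, as described in the citation.
        setattr(parent, attr, nn.Sequential(nn.Dropout2d(p), conv))
```

At test time the Dropout2d modules are kept stochastic (as in the first sketch above) and the M = 3 forward passes are aggregated.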
“…Although the design can achieve a high throughput, the restriction on the nonlinear activation functions limits its generality across application scenarios. In [5], the authors propose software-based intermediate-layer caching (IC), evaluated on last-layer BNNs.…”
Section: Related Work - A. Field-Programmable Gate Array-Based Accelerators
confidence: 99%
“…To further improve the overall hardware performance, we propose a hardware implementation of the IC technique [5] to decrease the required compute and the number of memory accesses. An example of using IC is illustrated in Figure 4, where the NN contains two layers; when the partial Bayesian technique is applied, the user only needs to apply the dropout mask and re-run the last layer S times.…”
Section: Intermediate-Layer Caching (IC)
confidence: 99%
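The IC idea is straightforward to express in software: the deterministic prefix of the network runs once, its activations are cached, and only the dropout mask plus the final (Bayesian) layer are re-evaluated S times. A minimal sketch under those assumptions, with hypothetical `prefix`/`last_layer` callables:

```python
import torch

@torch.no_grad()
def ic_forward(prefix, last_layer, x: torch.Tensor, S: int = 10, p: float = 0.1):
    # Expensive deterministic part: computed once and cached.
    h = prefix(x)
    samples = []
    for _ in range(S):
        # Fresh inverted-dropout mask per Monte Carlo sample.
        mask = (torch.rand_like(h) >= p).float() / (1.0 - p)
        # Cheap stochastic part: only the last layer is re-run.
        samples.append(last_layer(h * mask))
    return torch.stack(samples)  # S samples for mean/variance estimates
```

Compared with running the full network S times, compute and memory traffic scale only with the last layer, which is what the hardware implementation described above exploits.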