2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01392
Towards Total Recall in Industrial Anomaly Detection

Cited by 528 publications (287 citation statements)
References 25 publications
“…Note that we could reach the reported AUC of PatchCore only in a single fold of cross-validation, but not on average over different folds, and therefore report slightly lower average scores than in [22]. The same appears to be the case for CutPaste, where we could only approach the reported AUC.…”
Section: Local Anomaly Score
confidence: 67%
“…The required pre-training of the networks is always done by using the well-known ImageNet dataset, where all architectures reach a test set accuracy of around 85%. Note, we concatenated two adjacent layers of EfficientNet-B4 as the layers are relatively low-dimensional, which improves the performance slightly (see, e.g., [ 22 ]). Due to the pooling layers, the spatial resolution of the feature map decreases with the depth of the network, i.e., deeper layers have lower spatial resolution.…”
Section: Methods
confidence: 99%
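The layer-concatenation trick described above can be sketched as follows. The shapes, the nearest-neighbor upsampling, and the helper name are illustrative assumptions, not the cited paper's exact pipeline; the point is that the deeper, lower-resolution map must be upsampled before the two maps can be joined along the channel axis:

```python
import numpy as np

def concat_adjacent_layers(shallow, deep):
    """Fuse two adjacent backbone feature maps.

    shallow: (C1, H, W) from an earlier stage,
    deep:    (C2, H//s, W//s) from the next, pooled stage.
    Returns a (C1 + C2, H, W) map.
    """
    scale = shallow.shape[1] // deep.shape[1]
    # Nearest-neighbor upsample the deeper map back to the shallow resolution.
    up = deep.repeat(scale, axis=1).repeat(scale, axis=2)
    return np.concatenate([shallow, up], axis=0)

# Toy shapes standing in for two adjacent EfficientNet-B4 stages.
shallow = np.random.rand(56, 32, 32)
deep = np.random.rand(160, 16, 16)
fused = concat_adjacent_layers(shallow, deep)
print(fused.shape)  # (216, 32, 32)
```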
“…With the vigorous development of deep learning (DL) technology [8], many scholars and institutions have studied anomaly detection methods based on DL, which can be roughly divided into reconstruction-based [9–13] and pretrained-network [14–17] methods. In the reconstruction-based method, only anomaly-free samples are used for training in the training stage, and anomaly detection is performed in the inference stage according to the poor reconstruction effect for unknown anomaly regions.…”
Section: Introduction
confidence: 99%
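The reconstruction-based idea in the excerpt above can be sketched with a linear stand-in: fit a model on anomaly-free data only, then score test samples by reconstruction error. Here a PCA basis substitutes for the autoencoders the citing paper refers to, and all data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: confined to a 10-dimensional subspace of R^32.
mix = rng.normal(0, 1, (10, 32))
normal = rng.normal(0, 1, (500, 10)) @ mix

# Training stage: fit the model on anomaly-free samples only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:10]  # principal directions spanning the normal subspace

def anomaly_score(x):
    """Reconstruction error; large when x lies off the normal subspace."""
    recon = mean + (x - mean) @ basis.T @ basis
    return np.linalg.norm(x - recon, axis=-1)

# Inference stage: anomalous samples reconstruct poorly, normal ones do not.
test_normal = rng.normal(0, 1, (5, 10)) @ mix
test_anom = test_normal + rng.normal(0, 3, (5, 32))
print(anomaly_score(test_normal).mean() < anomaly_score(test_anom).mean())  # True
```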
“…Defard et al. [15] used a pretrained CNN to generate patch embedding vectors, exploited a multivariate Gaussian distribution to obtain a probabilistic representation of the normal class, and used the Mahalanobis distance to calculate anomaly scores of images to achieve anomaly localization. Roth et al. [16] adopted a greedy coreset mechanism to reduce the memory bank extracted by the pretrained network, and then obtained anomaly scores from the distance between the features in the memory bank and the patch features of the test image. Rudolph et al. [17] proposed to input normal images at multiple scales into a pretrained feature extraction network, used normalizing flows to perform maximum-likelihood training on the extracted features to obtain the distribution of normal images, and then calculated the log-likelihood to determine whether an image is abnormal.…”
Section: Introduction
confidence: 99%
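The memory-bank scheme attributed to Roth et al. above can be sketched as follows. The greedy k-center selection here is a common coreset heuristic and the function names are hypothetical; the paper's exact subsampling variant may differ in details, but the shape of the method is the same: shrink the bank of normal patch features, then score a test patch by its distance to the nearest retained feature:

```python
import numpy as np

def greedy_coreset(features, m):
    """Greedy k-center subsampling: repeatedly add the feature farthest
    from the current selection, keeping m of n features."""
    selected = [0]
    dists = np.linalg.norm(features - features[0], axis=1)
    for _ in range(m - 1):
        idx = int(np.argmax(dists))  # farthest point from the coreset so far
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(features - features[idx], axis=1))
    return features[selected]

def patch_score(patch, memory_bank):
    """Anomaly score of one patch: distance to its nearest neighbor
    in the (subsampled) memory bank."""
    return float(np.min(np.linalg.norm(memory_bank - patch, axis=1)))

rng = np.random.default_rng(1)
train_patches = rng.normal(0, 1, (1000, 64))  # patch features from normal images
bank = greedy_coreset(train_patches, 100)     # 10x memory reduction
print(bank.shape, patch_score(train_patches[0], bank))  # (100, 64) 0.0
```

A patch that is itself in the bank scores exactly zero; unfamiliar patches score by how far they fall from every retained normal feature.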