2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00546
GLAD: A Global-to-Local Anomaly Detector

Abstract: HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Cited by 4 publications (3 citation statements)
References 46 publications
“…Implementation details. We trained a GMM on each sequence setting K = 1000 initial Gaussian distributions and we let GLAD remove unnecessary Gaussians, as proposed by its authors [34]. For training, we selected the first few hundred subsequent frames of each scene without any obvious anomalies.…”
Section: Methods
confidence: 99%
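The implementation detail quoted above (a GMM per sequence with K = 1000 initial Gaussians, letting GLAD remove unnecessary ones) can be sketched as follows. This is not GLAD's own pruning mechanism; as an illustrative stand-in, a Dirichlet-process prior (scikit-learn's `BayesianGaussianMixture`) drives the mixture weights of unneeded components toward zero, so "removal" becomes a weight threshold. The toy features, small K, and the 1e-2 weight cutoff are all assumptions for the sketch.

```python
# Hedged sketch: fit an over-provisioned GMM and let unneeded components
# collapse, standing in for GLAD's removal of unnecessary Gaussians.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy "background features": two clusters standing in for patch descriptors.
X = np.vstack([rng.normal(0.0, 0.5, size=(200, 2)),
               rng.normal(5.0, 0.5, size=(200, 2))])

K = 16  # small stand-in for the paper's K = 1000
gmm = BayesianGaussianMixture(
    n_components=K,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Components whose mixture weight collapsed are effectively "removed".
active = int((gmm.weights_ > 1e-2).sum())
print(f"{active} of {K} components remain active")
```

With well-separated toy clusters, most of the 16 components collapse; on real per-sequence features the surviving count would depend on scene complexity.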
“…We model the extracted deep representations with a mixture of Gaussians to assess the likelihood of an image patch to be a part of the background. For that we extend the GLAD [34] framework to the background modeling of videos. This requires no dense annotations, only a selection of training frames with none or few anomalies present, thus we label our approach as weakly supervised.…”
Section: Background Feature Modeling
confidence: 99%
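The quoted statement describes scoring how likely a patch's deep representation is under a Gaussian-mixture background model. A minimal sketch of that likelihood test, assuming stand-in 2-D features in place of real deep representations and an illustrative percentile threshold:

```python
# Hedged sketch: score patch features under a fitted background GMM and
# flag low-log-likelihood patches as anomalous. The features, component
# count, and 1st-percentile threshold are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(500, 2))  # stand-in patch features
gmm = GaussianMixture(n_components=3, random_state=0).fit(background)

patches = np.array([[0.1, -0.2],   # background-like feature
                    [8.0,  8.0]])  # far outside the background model
log_lik = gmm.score_samples(patches)

# Threshold at the 1st percentile of background scores (illustrative).
threshold = np.percentile(gmm.score_samples(background), 1)
is_anomaly = log_lik < threshold
print(is_anomaly)
```

Because only anomaly-light training frames are needed to fit the background model, no dense annotations are required, which matches the weakly supervised framing in the quote.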