2021
DOI: 10.3390/e23121690

Perfect Density Models Cannot Guarantee Anomaly Detection

Abstract: Thanks to the tractability of their likelihood, several deep generative models show promise for seemingly straightforward but important applications like anomaly detection, uncertainty estimation, and active learning. However, the likelihood values empirically attributed to anomalies conflict with the expectations these proposed applications suggest. In this paper, we take a closer look at the behavior of distribution densities through the lens of reparametrization and show that these quantities carry less mea…
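The reparametrization argument in the abstract can be illustrated with a minimal sketch (not taken from the paper): density values are not invariant under an invertible change of variables, so a raw density threshold cannot by itself define what counts as an anomaly. The specific map `y = x**3` below is an arbitrary illustrative choice.

```python
import math

def gauss_pdf(x):
    # Density of the standard normal N(0, 1) at x.
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

x = 2.0                 # a candidate point
p_x = gauss_pdf(x)      # its density in the original coordinates

# Reparametrize with the invertible map y = x**3, so dy/dx = 3 * x**2.
# Change of variables: p_Y(y) = p_X(x) / |dy/dx|.
y = x ** 3
p_y = p_x / (3 * x ** 2)

# Same point, same model, different density value after relabeling the axis,
# so any fixed density threshold classifies it differently in each coordinate system.
print(p_x, p_y)
```

Because the Jacobian factor depends on the point, the density ordering of two points can even be reversed by a suitable reparametrization, which is the sense in which density values "carry less meaning" than the applications assume.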

Cited by 10 publications (2 citation statements)
References 66 publications (110 reference statements)
“…The use of anomaly detection algorithms, as opposed to probability laws, for predicting lifespan is justified by several factors [33][34][35]: Anomaly detection algorithms excel at identifying unusual patterns in data, which can be indicative of potential system failures or abnormalities. Unlike probability laws, these algorithms do not rely on prior knowledge or accumulated data,…”
Section: PLOS Water (mentioning; confidence: 99%)
“…One of the continuing unsolved issues in deep generative modelling has been the out-of-distribution problem [171]. In particular, the model assigns high probability ("overconfidently wrong predictions") to datapoints that were unseen during training, which is both problematic and unintuitive [208].…”
Section: Out-of-distribution Problem (mentioning; confidence: 99%)
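The density/typicality mismatch behind the out-of-distribution problem quoted above can be sketched with a closed-form model rather than a deep generative one (an illustrative analogy, not the cited experiments): under a standard Gaussian in high dimension, the all-zeros point attains the maximal density, yet genuine samples essentially never lie near it. High density alone therefore does not certify that a point is in-distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100  # dimensionality; the effect grows with d

def log_density(x):
    # log N(x; 0, I_d) for a standard d-dimensional Gaussian.
    return -0.5 * (d * np.log(2 * np.pi) + np.sum(x ** 2, axis=-1))

samples = rng.standard_normal((10_000, d))   # typical in-distribution points
typical = log_density(samples).mean()        # concentrates near -d/2*log(2*pi) - d/2

atypical_point = np.zeros(d)                 # the mode: maximal density, never sampled

# The never-sampled mode scores far *higher* log-density than typical data,
# mirroring how an "OOD" input can be assigned high likelihood.
print(log_density(atypical_point), typical)
```

Samples concentrate on a thin shell of radius about `sqrt(d)`, so the highest-density region is itself atypical; this is one concrete reason likelihood thresholds behave counterintuitively for anomaly detection.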