2019 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2019.00136

Multi-view Outlier Detection in Deep Intact Space

Cited by 21 publications (12 citation statements)
References 28 publications

“…This contribution focuses on how to fuse information for multi-perspective anomaly detection. Multiple perspectives or views are generally associated with data obtained from different modalities [15]. In the case of high-dimensional data such as images, they can also refer to different features extracted from a single view, such as a scene descriptor (GIST, [16]), Histogram of Oriented Gradients (HOG), and Local Binary Patterns (LBP) [17].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
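
As a concrete illustration of that point, the sketch below builds two feature "views" (a HOG descriptor and an LBP histogram) from a single image using scikit-image. The example image, the descriptor parameters, and the omission of GIST (which has no standard scikit-image implementation) are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np
from skimage import color, data
from skimage.feature import hog, local_binary_pattern
from skimage.util import img_as_ubyte

# Any grayscale image stands in for a single data instance (assumption).
image = img_as_ubyte(color.rgb2gray(data.astronaut()))

# View 1: Histogram of Oriented Gradients (HOG) feature vector.
view_hog = hog(image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# View 2: histogram of "uniform" Local Binary Patterns (LBP).
lbp = local_binary_pattern(image, P=8, R=1.0, method="uniform")
view_lbp, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

# Each view is a different representation of the same instance; a
# multi-view outlier detector would consume both jointly.
print(view_hog.shape, view_lbp.shape)
```
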
“…It focuses on three outlier types: attribute outliers, class outliers, and class-attribute outliers. Such multi-view outlier detection is needed because most existing approaches address only part of the problem [112]. Figure 13 illustrates the three different types of outliers.…”
Section: Spatiotemporal Outlier Detection
Citation type: mentioning (confidence: 99%)
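
The distinction between the three outlier types can be made concrete with a toy two-view construction. The sketch below follows the usual informal definitions (normal samples keep the same cluster in every view; class outliers swap clusters across views; attribute outliers take deviant values in every view; class-attribute outliers mix both behaviours). It is only an illustration, not the generation protocol of the cited paper or of [112].

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster(center, n):
    """Gaussian blob around a 2-D center (illustrative parameters)."""
    return rng.normal(loc=center, scale=0.3, size=(n, 2))

# Normal samples: same cluster membership (A then B) in both views.
view1 = np.vstack([cluster([0, 0], 50), cluster([5, 5], 50)])
view2 = np.vstack([cluster([0, 0], 50), cluster([5, 5], 50)])

# Class outlier: cluster A in view 1 but cluster B in view 2.
class_v1, class_v2 = cluster([0, 0], 1), cluster([5, 5], 1)

# Attribute outlier: far from every cluster in both views.
attr_v1, attr_v2 = cluster([20, 20], 1), cluster([20, 20], 1)

# Class-attribute outlier: deviant values in one view, swapped class in the other.
ca_v1, ca_v2 = cluster([-15, -15], 1), cluster([5, 5], 1)

view1 = np.vstack([view1, class_v1, attr_v1, ca_v1])
view2 = np.vstack([view2, class_v2, attr_v2, ca_v2])
print(view1.shape, view2.shape)  # (103, 2) (103, 2)
```
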
“…Perform the P1M algorithm on Y to obtain η, U^(t), c^(t) using (14) and (15), respectively; 4. Fix U^(t), c^(t) and use (17) to obtain W_v for each view; 5. Fix W_v and use (12) to obtain Y_v; the Y_v of all views form Y; perform the P1M algorithm on Y to obtain U^(t+1), c^(t+1) using (15);…”
Section: Linear Multi-view Original Space - Subspace P1M (LMO-SP1M)
Citation type: mentioning (confidence: 99%)
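
The alternating structure of these steps can be sketched as a simple loop. The helpers below (p1m_update, solve_W_v, reconstruct_view) are hypothetical stand-ins for equations (14)/(15), (17), and (12), which are not reproduced in this report; only the control flow of steps 3-5 is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

def p1m_update(Y, k=2):
    """Hypothetical stand-in for the P1M step (eqs. (14)-(15)): returns
    assignments U and centres c from the stacked representation Y."""
    c = Y[rng.choice(len(Y), size=k, replace=False)]
    U = np.argmin(np.linalg.norm(Y[:, None, :] - c[None, :, :], axis=-1), axis=1)
    return U, c

def solve_W_v(X_v, U, c):
    """Hypothetical stand-in for eq. (17): update view v's W_v with U, c fixed."""
    return np.linalg.pinv(X_v) @ X_v

def reconstruct_view(X_v, W_v):
    """Hypothetical stand-in for eq. (12): representation Y_v of view v."""
    return X_v @ W_v

views = [rng.standard_normal((100, 5)), rng.standard_normal((100, 8))]
W = [np.eye(X_v.shape[1]) for X_v in views]

for t in range(10):
    # Steps 3 / 5: form Y from the per-view Y_v and run P1M on it.
    Y = np.hstack([reconstruct_view(X_v, W_v) for X_v, W_v in zip(views, W)])
    U, c = p1m_update(Y)
    # Step 4: with U, c fixed, update W_v for each view.
    W = [solve_W_v(X_v, U, c) for X_v in views]
```
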