2019
DOI: 10.26438/ijcse/v7i4.10601064
Credit Card Fraud Detection using Local Outlier Factor and Isolation Forest

Cited by 40 publications (22 citation statements)
References 0 publications
“…As discussed in [42], the AUC metrics for a credit card fraud transaction dataset in Table 2 are: (i) LOF 0.584; (ii) KNN 0.961; and (iii) i-Forest 0.951. In addition, in [40] there is a direct comparison between the LOF algorithm and the i-Forest one using a financial dataset, which is different from the dataset used in our study. According to the authors of [40], the LOF method achieves 0.28 precision for outlier values and the i-Forest method 0.02.…”
Section: Comparison of Different Algorithms, Datasets and Results
Mentioning, confidence: 99%
“…In addition, in [40] there is a direct comparison between the LOF algorithm and the i-Forest one using a financial dataset, which is different from the dataset used in our study. According to the authors of [40], the LOF method achieves 0.28 precision for outlier values and the i-Forest method 0.02. In order to be fair, a direct comparison of our results with the aforementioned studies, or with any other study, is neither just nor indicative, because the dataset used in this study is not similar to the datasets used in other studies in terms of type or volume.…”
Section: Comparison of Different Algorithms, Datasets and Results
Mentioning, confidence: 99%
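For context, the comparison quoted above can be reproduced in spirit with scikit-learn's LocalOutlierFactor and IsolationForest and the same metrics (ROC AUC and precision). The sketch below is only illustrative: it uses a synthetic imbalanced dataset in place of the credit card data referenced in [40] and [42], and the parameters (contamination=0.01, n_neighbors=20) are assumptions, not values reported by the cited authors.

# Minimal sketch (assumed setup): LOF vs. Isolation Forest on synthetic data,
# scored with ROC AUC and precision as in the quoted comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score, precision_score

# Synthetic stand-in: ~1% "fraud" (label 1), the rest legitimate (label 0).
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.99],
                           random_state=42)

# Isolation Forest: predict() returns -1 for outliers, 1 for inliers.
iso = IsolationForest(contamination=0.01, random_state=42).fit(X)
iso_scores = -iso.score_samples(X)             # flip sign so larger = more anomalous
iso_pred = (iso.predict(X) == -1).astype(int)

# LOF in outlier-detection mode: fit_predict on the same data.
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
lof_pred = (lof.fit_predict(X) == -1).astype(int)
lof_scores = -lof.negative_outlier_factor_     # larger = more anomalous

for name, scores, pred in [("i-Forest", iso_scores, iso_pred),
                           ("LOF", lof_scores, lof_pred)]:
    print(f"{name}: AUC={roc_auc_score(y, scores):.3f}, "
          f"precision={precision_score(y, pred, zero_division=0):.3f}")

As in the cited studies, both detectors are unsupervised, so the labels are used only for evaluation, and the absolute scores depend heavily on the dataset and the assumed contamination rate.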
“…Thus, they stated that IForest achieved an AUROC of 70% on high-dimensional data, while their dataset does not contain anomalies in the training samples, meaning that they tested the models in a semi-supervised anomaly detection setting, contrary to the authors in [16], [18].…”
Section: A. Anomaly Detection
Mentioning, confidence: 98%
“…On the other hand, the authors in [16], [17] used Local Outlier Factor (LOF) and Isolation Forest (IForest) techniques to detect anomalies in large-scale data. Moreover, they used F1-score, Precision, and Recall as performance metrics, except for Galante in [17], who added One-Class Support Vector Machines (OCSVM) as an additional detection technique and AUROC as an additional performance metric.…”
Section: A. Anomaly Detection
Mentioning, confidence: 99%
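The semi-supervised protocol described in these statements (no anomalies in the training samples) can be sketched as follows, again on a synthetic dataset. The three detectors (IForest, LOF in novelty mode, OCSVM) and the metrics (F1-score, Precision, Recall, AUROC) follow the quotes, but every parameter here is an assumption for illustration and does not reproduce the setups in [16]-[18].

# Minimal sketch (assumed setup): fit on anomaly-free training data, then
# evaluate on a test split that keeps its anomalies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

X, y = make_classification(n_samples=10_000, n_features=20, weights=[0.98],
                           random_state=0)
X_train = X[:6_000][y[:6_000] == 0]    # training split with anomalies removed
X_test, y_test = X[6_000:], y[6_000:]  # test split keeps its anomalies

models = {
    "IForest": IsolationForest(random_state=0),
    "LOF": LocalOutlierFactor(n_neighbors=20, novelty=True),  # novelty mode allows predict() on new data
    "OCSVM": OneClassSVM(nu=0.02, gamma="scale"),
}

for name, model in models.items():
    model.fit(X_train)
    pred = (model.predict(X_test) == -1).astype(int)  # -1 = anomaly
    scores = -model.score_samples(X_test)             # larger = more anomalous
    print(f"{name}: F1={f1_score(y_test, pred):.3f}, "
          f"P={precision_score(y_test, pred, zero_division=0):.3f}, "
          f"R={recall_score(y_test, pred):.3f}, "
          f"AUROC={roc_auc_score(y_test, scores):.3f}")

The key difference from the unsupervised sketch above is the anomaly-free training split, which is what the quoted statement calls the semi-supervised anomaly detection type.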