2020
DOI: 10.3233/jifs-179720

Image forgery detection using deep textural features from local binary pattern map

Cited by 7 publications (2 citation statements)
References 14 publications
“…Using CNNs for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [31] was a crucial step in the emergence of deep learning. These pretrained CNNs have learned to describe a wide range of images in detail [32]. Hence, this paper uses DenseNet121 as an input layer to extract the deep embedding texture features from the IR-CLBP-MC image produced by the method proposed in this research.…”
Section: Feature Selection and Extraction Using CNN
confidence: 99%
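The pipeline described in this excerpt, a pretrained DenseNet121 used purely as a fixed feature extractor on an LBP-derived map, can be sketched as below. This is a minimal illustration, not the authors' code: the input is random data standing in for an IR-CLBP-MC map, and `weights=None` keeps the sketch runnable offline, whereas the cited approach would load ImageNet-pretrained weights.

```python
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input

# DenseNet121 without its classification head acts as a fixed feature
# extractor; pooling="avg" collapses the final feature maps into a
# 1024-dimensional embedding per image. The cited paper would use
# weights="imagenet"; weights=None keeps this sketch offline.
extractor = DenseNet121(weights=None, include_top=False, pooling="avg")

# Hypothetical stand-in for an IR-CLBP-MC map: one 224x224 3-channel image.
lbp_map = (np.random.rand(1, 224, 224, 3) * 255.0).astype("float32")
features = extractor.predict(preprocess_input(lbp_map), verbose=0)
print(features.shape)  # (1, 1024)
```

The extracted embedding would then be passed to a downstream classifier, exactly as the excerpt describes.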
“…Wu et al [19] proposed an end-to-end detection network that framed splice localization as a local anomaly-detection problem. Remya and Wilscy [20] used a pretrained convolutional neural network to extract deep texture features from a rotation-invariant local binary pattern (RI-LBP) map of chroma images, and then trained a quadratic support vector machine (SVM) classifier to improve the detection accuracy on forged images. El-Latif et al [21] proposed an image-stitching detection algorithm based on deep learning and the wavelet transform.…”
Section: Introduction
confidence: 99%
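The Remya and Wilscy pipeline summarized above (rotation-invariant LBP map → texture features → quadratic SVM) can be sketched as follows. This is a self-contained illustration under stated assumptions: synthetic patches replace real chroma images, a plain LBP histogram stands in for the CNN-extracted deep texture features, and the "quadratic SVM" is realized as a degree-2 polynomial-kernel SVC.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def lbp_histogram(image, points=8, radius=1):
    """Rotation-invariant uniform LBP map, summarized as a normalized histogram.

    The cited method feeds the LBP map through a pretrained CNN; a plain
    histogram is used here as a lightweight stand-in for deep features.
    """
    img8 = (np.clip(image, 0.0, 1.0) * 255).astype(np.uint8)
    lbp = local_binary_pattern(img8, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist

# Synthetic "authentic" (smooth) vs. "forged" (noisy) patches, illustration only.
smooth = [lbp_histogram(np.full((64, 64), 0.5)
                        + 0.01 * rng.standard_normal((64, 64)))
          for _ in range(20)]
noisy = [lbp_histogram(rng.random((64, 64))) for _ in range(20)]
X = np.vstack(smooth + noisy)
y = np.array([0] * 20 + [1] * 20)

# A "quadratic SVM" is a polynomial-kernel SVC of degree 2.
clf = SVC(kernel="poly", degree=2).fit(X, y)
print(clf.score(X, y))
```

The texture histograms of the two synthetic classes differ enough that the degree-2 SVM separates them easily; with real forged/authentic chroma patches and CNN embeddings, the same fit/predict structure applies.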