2021
DOI: 10.1002/int.22792
Document images forgery localization using a two‐stream network

Abstract: Document images often contain essential and sensitive information. With image editing software, one can easily manipulate the semantic meaning of a document image by copy-move, splicing, and removal, which causes many security issues. Hence, document image forgery detection and localization are of great significance. In this paper, a novel two-stream network is proposed to detect and locate the forgery regions of document images. One stream captures forgery traces from the spatial information, including …


Cited by 12 publications (3 citation statements)
References 45 publications
“…With the increasing application of deep learning techniques in image processing tasks, some researchers [5] have attempted to perform document image forgery detection by constructing deep network models. Xu et al. [6] proposed a new dual-stream network architecture that incorporates residual filters. In this network, the two streams are responsible for capturing forgery traces in the image and abnormal features between adjacent pixels.…”
Section: Introduction
confidence: 99%
“…FL coordinates the learning task under the orchestration of a central server, while raw client data are shared neither with the server nor among distinct clients [3–5]. With the increasing attention paid to artificial intelligence security and privacy in ML, FL has become a hot research topic over the past half decade [6–13].…”
Section: Introductionmentioning
confidence: 99%
“…[3–5] With the increasing attention paid to artificial intelligence security and privacy in ML, FL has become a hot research topic over the past half decade [6–13]. Unlike traditional centralized ML paradigms, in each round of FL training the clients train local models on their own data sets and upload the parameters of the trained model to the server, rather than sharing the raw local data. The server then aggregates the local models to update the global model and sends it to the clients for the next round of training.…”
confidence: 99%
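The round structure quoted above (clients train locally, upload only parameters, server averages and redistributes) can be sketched as a minimal federated-averaging loop. This is an illustrative sketch under simplifying assumptions, not the cited papers' implementation; all function and variable names are hypothetical, and real systems add client sampling, weighted averaging, and secure aggregation:

```python
# Minimal FedAvg-style round, illustrating the client-train /
# upload-parameters / server-aggregate cycle described above.
# Hypothetical toy model: each client fits a scalar to its data
# with a squared loss; raw data never leaves the client.

def local_update(global_params, local_data, lr=0.1):
    """Client step: gradient descent on (p - x)^2 over local data.
    Returns updated parameters; local_data itself is never shared."""
    params = list(global_params)
    for x in local_data:
        for i in range(len(params)):
            grad = 2.0 * (params[i] - x)  # derivative of (p - x)^2
            params[i] -= lr * grad
    return params

def server_aggregate(client_params):
    """Server step: element-wise average of uploaded parameter vectors."""
    n = len(client_params)
    dim = len(client_params[0])
    return [sum(p[i] for p in client_params) / n for i in range(dim)]

# One training round with three clients holding private datasets.
global_params = [0.0]
client_datasets = [[1.0, 1.2], [0.8], [1.1, 0.9, 1.0]]
uploads = [local_update(global_params, d) for d in client_datasets]
global_params = server_aggregate(uploads)  # broadcast for the next round
```

Repeating the round moves the global parameter toward a consensus fit of all clients' data while only model parameters ever cross the network, which is the privacy property the citing statement emphasizes.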