As one of the important techniques for protecting the copyrights of digital images, content-based image copy detection has attracted considerable attention over the past few decades. Traditional content-based copy detection methods usually extract local hand-crafted features and then quantize them into visual words with the bag-of-visual-words (BOW) model to build an inverted index file for rapid image matching. Recently, deep learning features, such as those derived from convolutional neural networks (CNNs), have been shown to outperform hand-crafted features in many computer vision applications. However, it is not feasible to directly apply existing global CNN features to copy detection, since they are usually sensitive to partial content-discarding attacks such as cropping and occlusion. We therefore propose a local CNN feature-based image copy detection method with contextual hash embedding. We first extract local CNN features from images and quantize them into visual words to construct an index file. Then, since the BOW quantization process reduces the discriminability of these features to some extent, a contextual hash sequence is computed from a relatively large region surrounding each CNN feature and embedded into the index file to restore that discriminability. Extensive experimental results demonstrate that the proposed method achieves superior performance compared to related works on the copy detection task.
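A minimal sketch of the indexing-and-query pipeline described above, assuming each image is represented by a CNN feature map with one local descriptor per spatial location. The function names, the k-means codebook, the sign-based contextual hash, and the Hamming-distance verification are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(descriptor_pool, n_words=1024):
    """Offline step: fit a quantizer mapping local CNN descriptors to visual words."""
    return KMeans(n_clusters=n_words, n_init=4).fit(descriptor_pool)

def contextual_hash(feature_map, y, x, radius=3, bits=64):
    """Binarize mean-pooled activations of a larger region around (y, x)."""
    region = feature_map[max(0, y - radius):y + radius + 1,
                         max(0, x - radius):x + radius + 1, :]
    context = region.mean(axis=(0, 1))            # one value per channel
    context = context[:bits] - context[:bits].mean()
    return (context > 0).astype(np.uint8)         # sign-based binary hash

def build_index(images, codebook):
    """Inverted index: visual word -> [(image_id, contextual hash), ...]."""
    index = {}
    for image_id, feature_map in images:
        h, w, c = feature_map.shape
        locs = [(y, x) for y in range(h) for x in range(w)]
        words = codebook.predict(feature_map.reshape(-1, c))
        for (y, x), word in zip(locs, words):
            index.setdefault(int(word), []).append(
                (image_id, contextual_hash(feature_map, y, x)))
    return index

def query(index, feature_map, codebook, max_hamming=8):
    """Vote for images whose word matches and whose contextual hash is close."""
    votes = {}
    h, w, c = feature_map.shape
    locs = [(y, x) for y in range(h) for x in range(w)]
    words = codebook.predict(feature_map.reshape(-1, c))
    for (y, x), word in zip(locs, words):
        q_hash = contextual_hash(feature_map, y, x)
        for image_id, db_hash in index.get(int(word), []):
            if np.count_nonzero(q_hash != db_hash) <= max_hamming:
                votes[image_id] = votes.get(image_id, 0) + 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```

The hash check acts as a cheap filter inside each posting list: two features that fall into the same visual word only vote for a match if their surrounding context also agrees, which is the role the abstract assigns to the embedded contextual hash.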
Digital image watermarking is one of the effective schemes for protecting the copyrights of still images. However, existing watermarking schemes are still not robust enough against common geometric transformation attacks, such as arbitrary rotation, scaling, and shifting, while maintaining a desirable hiding capacity. To address this issue, we propose a robust watermarking scheme based on geometric correction codes (GCCs). In this scheme, the watermark and pre-set GCCs are combined and embedded into a cover image to obtain the watermarked image. At the watermark extraction stage, the watermarked image, possibly subjected to a variety of geometric transformation attacks, is first geometrically corrected by minimising the difference between the extracted and the original GCCs; the watermark is then extracted from the corrected image. Experiments demonstrate that, compared with typical watermarking schemes, the proposed scheme achieves much higher robustness to common geometric transformation attacks and comparable invisibility at the same embedding capacity.
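An illustrative sketch of the correction search at extraction time, for a grayscale image and rotation/scaling attacks (a shift search would follow the same pattern). The candidate-grid search, the Hamming criterion, and the extract_bits helper are assumptions introduced here for illustration; the abstract does not specify the embedding domain or the optimisation procedure.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def extract_bits(image, n):
    """Placeholder bit reader (toy LSB-style sampling); a real scheme would
    read the GCC from the paper's embedding domain, which is hypothetical here."""
    flat = image.reshape(-1)
    step = max(1, flat.size // n)
    return (flat[::step][:n].astype(np.int64) & 1).astype(np.uint8)

def correct_geometry(attacked, original_gcc, angles, scales):
    """Try candidate inverse transforms; keep the one whose extracted GCC
    has minimum Hamming distance to the pre-set GCC."""
    best, best_dist = attacked, len(original_gcc) + 1
    for angle in angles:
        derotated = rotate(attacked, -angle, reshape=False, order=1)
        for scale in scales:
            candidate = zoom(derotated, 1.0 / scale, order=1)
            gcc = extract_bits(candidate, len(original_gcc))
            dist = int(np.count_nonzero(gcc != original_gcc))
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best  # geometrically corrected image; the watermark is read from this

# Usage (hypothetical search grid):
# corrected = correct_geometry(attacked, gcc,
#                              angles=np.arange(-45, 46, 1),
#                              scales=np.arange(0.5, 2.01, 0.05))
# watermark = extract_bits(corrected, watermark_length)
```

The key idea the sketch captures is that the GCC serves as a known reference signal: the geometric attack is undone not by estimating the transform directly, but by picking whichever candidate correction makes the embedded code readable again.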
Recently, generative steganography, which transforms secret information into a generated image, has emerged as a promising technique for resisting steganalysis detection. However, owing to the inefficiency and irreversibility of the secret-to-image transformation, it is hard to find a good trade-off between hiding capacity and extraction accuracy. To address this issue, we propose a secret-to-image reversible transformation (S2IRT) scheme for generative steganography. The proposed S2IRT scheme is built on a generative model, the Glow model, which establishes a bijective mapping between a latent space with a multivariate Gaussian distribution and an image space with a complex distribution. During the S2I transformation, guided by a given secret message, we construct a latent vector and map it to a generated image with the Glow model, so that the secret message is ultimately transformed into the generated image. Owing to the good efficiency and reversibility of the S2IRT scheme, the proposed steganographic approach achieves both high hiding capacity and accurate extraction of the secret message from the generated image. Furthermore, a separate-encoding-based S2IRT (SE-S2IRT) scheme is proposed to improve robustness against common image attacks. Experiments demonstrate that the proposed steganographic approaches achieve high hiding capacity (up to 4 bpp) and accurate information extraction (almost 100% accuracy) simultaneously, while maintaining desirable anti-detectability and imperceptibility.
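A deliberately simplified sketch of a reversible secret-to-latent mapping, assuming a hypothetical flow model exposing glow.decode(z) -> image and glow.encode(image) -> z. Here one bit steers the sign of each Gaussian latent element, so the latent still follows the prior marginally; the paper's arrangement-based S2IRT construction packs bits far more densely, and this one-bit-per-element encoding is only a conceptual stand-in.

```python
import numpy as np

def secret_to_latent(bits, rng=np.random.default_rng(0)):
    """Each bit picks the half of the Gaussian its latent element lies in:
    sampled magnitude keeps the marginal distribution close to the prior."""
    magnitudes = np.abs(rng.standard_normal(len(bits)))
    return np.where(np.asarray(bits) == 1, magnitudes, -magnitudes)

def latent_to_secret(z):
    """Inverse mapping: the sign of each latent element recovers one bit."""
    return (np.asarray(z) > 0).astype(int)

bits = np.random.default_rng(1).integers(0, 2, size=16)
z = secret_to_latent(bits)
# stego = glow.decode(z)                            # hide: latent -> generated image
# bits_out = latent_to_secret(glow.encode(stego))   # extract: image -> bits
assert np.array_equal(latent_to_secret(z), bits)    # round trip is exact
```

Because the flow model is bijective, the only lossy step is re-encoding the stego image after channel attacks; making the bit mapping coarse (here, just the sign) is what trades capacity for the robustness the SE-S2IRT variant targets.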