Digital watermarking has been widely studied as a method of protecting the intellectual property rights of digital images, which are high-value-added content. Recently, studies implementing these techniques with neural networks have been conducted. This paper likewise proposes a neural network for robust, invisible, blind watermarking of digital images. It is a convolutional neural network (CNN)-based scheme consisting of pre-processing networks for both the host image and the watermark, a watermark embedding network, an attack simulation for training, and a watermark extraction network to extract the watermark whenever necessary. The scheme has three characteristics on the application side. The first is adaptability to the host image's resolution: the proposed method can be applied to a host image of any resolution because the network is composed without any resolution-dependent layer or component. The second is adaptability of the watermark information, so that any user-defined watermark data can be used; this is achieved by training with random binary watermark data that are regenerated at every iteration. The last is controllability of the trade-off between watermark invisibility and robustness against attacks, which makes the scheme applicable to applications with different invisibility and robustness requirements; for this, a strength scaling factor for the watermark information is applied. The scheme also has the following structural and training characteristics. First, the proposed network is simple: its deepest path, which passes through the pre-processing network, the embedding network, and the extraction network, consists of only 13 CNN layers. Second, the host's resolution is maintained by increasing the resolution of the watermark in the watermark pre-processing network, which increases the invisibility of the watermark. In addition, average pooling is used in the watermark pre-processing network to properly combine the binary watermark values with the host image, which further increases invisibility. Finally, as the loss function, the extractor uses the mean absolute error (MAE), while the embedding network uses the mean square error (MSE); because the extracted watermark consists of binary values, the MAE between the extracted and original watermarks is more suitable for balanced training between the embedder and the extractor. Training and evaluation confirm that the proposed method provides high invisibility of the watermark (WM) and high robustness against various pixel-value-change attacks and geometric attacks. The experimental results show that each of the three application-side characteristics works as intended, and that the proposed scheme performs well compared with previous methods.
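The overall embedder/extractor structure described above can be sketched in code. The following PyTorch-style snippet is only an illustrative sketch, assuming particular layer counts, channel widths, and an interpolation-based upsampler in the watermark pre-processing network; none of these hyperparameters or module names come from the paper itself.

```python
# Illustrative sketch (PyTorch) of a CNN embedder/extractor with a strength
# scaling factor and average pooling in the watermark pre-processing network.
# Layer counts and channel widths are assumptions, not the authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WatermarkPreprocessor(nn.Module):
    """Expands a binary watermark map to the host resolution; average pooling
    smooths the hard 0/1 values before they are combined with the host."""
    def __init__(self, ch=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AvgPool2d(3, stride=1, padding=1)

    def forward(self, wm, host_hw):
        wm = F.interpolate(wm, size=host_hw, mode='nearest')  # match host resolution
        return self.pool(self.conv(wm))

class Embedder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.host_pre = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.wm_pre = WatermarkPreprocessor(ch)
        self.embed = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, host, wm, strength=1.0):
        feats = torch.cat([self.host_pre(host),
                           self.wm_pre(wm, host.shape[-2:])], dim=1)
        residual = self.embed(feats)
        # Strength scaling factor controls the invisibility/robustness trade-off.
        return host + strength * residual

class Extractor(nn.Module):
    def __init__(self, ch=32, wm_hw=(32, 32)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.wm_hw = wm_hw

    def forward(self, marked):
        return F.adaptive_avg_pool2d(self.net(marked), self.wm_hw)

# Loss terms as described in the abstract: MSE for the embedder (image fidelity)
# and MAE for the extractor (binary watermark recovery):
# loss = F.mse_loss(marked, host) + F.l1_loss(wm_hat, wm)
```

In use, an attack-simulation layer (noise, JPEG-like distortion, cropping, etc.) would be inserted between the embedder output and the extractor input during training so that both networks learn under attack.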
This paper proposes a new adaptive watermarking scheme for digital images that provides blind extraction, invisibility, and robustness against attacks. Typical schemes for invisibility and robustness consist of two main techniques: finding local positions to be watermarked and embedding the watermark into the pixels at those locations. Instead of searching for local positions, however, our scheme uses a global space: the watermark data are spread over all four lowest-frequency subbands resulting from an n-level Mallat-tree two-dimensional (2D) DWT, where n depends on the amount of watermark data and the resolution of the host image, without any further process for finding watermarking locations. To embed the watermark data into the subband coefficients, weighting factors are applied according to the type and energy of each subband to adjust the strength of the watermark, which is why we call the scheme adaptive. To examine the ability of the proposed scheme, images of various resolutions are tested against various attacks, both pixel-value-changing attacks and geometric attacks. Experimental results and comparisons with existing works show that the proposed scheme performs better than previous works, except those that specialize in certain types of attacks.
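As an illustration of the subband-embedding idea, the sketch below uses PyWavelets to spread watermark bits over the four lowest-frequency subbands of an n-level 2D DWT with an energy-dependent weight per subband. The Haar wavelet, the weighting rule, and the scaling constant alpha are assumptions chosen for illustration, not the paper's actual weighting factors.

```python
# Illustrative sketch of embedding watermark bits into the four lowest-frequency
# subbands (LL, LH, HL, HH at level n) of an n-level 2D DWT, with a weight that
# depends on each subband's energy. Parameters are assumptions, not the paper's.
import numpy as np
import pywt

def embed(host, wm_bits, n_levels=2, alpha=0.05):
    coeffs = pywt.wavedec2(host.astype(np.float64), 'haar', level=n_levels)
    LL = coeffs[0]                      # approximation (lowest-frequency) subband
    LH, HL, HH = coeffs[1]              # level-n detail subbands
    bands = [LL, LH, HL, HH]
    bits = np.resize(wm_bits, LL.shape) * 2 - 1     # map {0,1} -> {-1,+1}

    marked_bands = []
    for band in bands:
        # Energy-dependent weighting factor adjusts the watermark strength per subband.
        weight = alpha * np.sqrt(np.mean(band ** 2) + 1e-12)
        marked_bands.append(band + weight * bits)

    coeffs[0] = marked_bands[0]
    coeffs[1] = tuple(marked_bands[1:])
    return pywt.waverec2(coeffs, 'haar')
```

Extraction would invert this step blindly by correlating the corresponding subbands of the received image with the known bit pattern, but the exact extraction rule is not specified in the abstract.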
This paper proposes a new hardware architecture to speed up digital hologram calculation through parallel computation. To realize it, we modify the computer-generated hologram (CGH) equation and propose a cell-based very-large-scale integrated circuit architecture. We derive a new equation that calculates the horizontal or vertical hologram pixel values in parallel, after identifying the calculation regularity in the horizontal or vertical direction of the basic CGH equation. We also propose the architecture of the computer-generated hologram cell, consisting of an initial parameter calculator and update-phase calculators based on the derived equation, and implement it in hardware. Modifying the equation simplifies the hardware, and approximating the cosine function optimizes it further. In addition, we show a hardware architecture that parallelizes the calculation in the horizontal direction by extending the computer-generated hologram cells. In the experiments, we analyze the hardware resource usage and the performance and capability characteristics of the look-up table used in the computer-generated hologram cell. These analyses make it possible to match the amount of hardware to the required precision of the results. Here, we used the platform from our previous work for the computer-generated hologram kernel and the structure of the processor.
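The row-wise regularity that the cell architecture exploits can be illustrated in software: under the standard Fresnel-approximation phase model, the phase along a hologram row has a constant second difference, so each pixel can be produced with two additions plus a cosine look-up. The Python sketch below assumes this common phase model, an arbitrary LUT size, and illustrative symbol names (pixel pitch p, wavelength lam, object point (xj, yj, zj)); the paper's exact equation and fixed-point format may differ.

```python
# Illustrative sketch of incremental CGH row computation: an initial parameter
# calculator sets the phase, its first difference, and the constant second
# difference; an update-phase calculator then advances one pixel per step.
import numpy as np

def cgh_row(width, p, lam, xj, yj, zj, y, lut_bits=10):
    k = np.pi / (lam * zj)                          # Fresnel phase constant
    lut_size = 1 << lut_bits
    lut = np.cos(2 * np.pi * np.arange(lut_size) / lut_size)   # cosine look-up table

    # Initial parameter calculator (pixel x = 0).
    theta = k * ((0 - xj) ** 2 + (y - yj) ** 2)     # phase at the first pixel
    d_theta = k * (2 * (0 - xj) * p + p ** 2)       # first difference of the phase
    dd_theta = 2 * k * p ** 2                       # constant second difference

    row = np.empty(width)
    for n in range(width):                          # update-phase calculation
        idx = int((theta / (2 * np.pi)) % 1.0 * lut_size)
        row[n] = lut[idx]                           # cosine via LUT approximation
        theta += d_theta
        d_theta += dd_theta
    return row
```

In hardware, each such loop body would map to one CGH cell, and several cells running side by side would produce adjacent row segments in parallel, which is the horizontal parallelization the abstract refers to.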