Video watermarking has recently received considerable attention. Several applications across a variety of domains have been implemented, and many more are in progress. This paper formulates a novel video watermarking framework comprising three stages: (i) optimal video frame prediction, (ii) watermark embedding, and (iii) watermark extraction.
In the proposed model, optimal frame prediction is carried out using a deep belief network (DBN). Initially, randomly chosen frames from each video are given as input to a genetic algorithm (GA) that selects the frames for which the peak signal-to-noise ratio (PSNR) is maximal. Each frame is then assigned a label of one or zero, where one denotes a frame with high PSNR (suitable for the embedding process) and zero denotes a frame with low PSNR (not suitable for embedding).
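The labelling rule described above can be sketched as follows. This is a minimal illustration only: the trial embed() routine and the 40 dB PSNR threshold are assumptions, since the abstract fixes neither, and the GA fitness would likewise be driven by this PSNR value.

```python
# Minimal sketch of PSNR-based frame labelling.
# embed() and the threshold are hypothetical placeholders, not the paper's method.
import numpy as np

def psnr(original: np.ndarray, modified: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of identical shape."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def label_frames(frames, embed, threshold_db: float = 40.0):
    """Label each frame 1 (embeddable) or 0 based on the PSNR after a trial embedding."""
    labels = []
    for frame in frames:
        watermarked = embed(frame)  # hypothetical trial embedding of the watermark
        labels.append(1 if psnr(frame, watermarked) >= threshold_db else 0)
    return labels
```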
Consequently, a data library is formed from the obtained results, in which each video frame is represented by its gray-level co-occurrence matrix (GLCM) features and its label (embeddable or not). This library is used to train the DBN, which can then predict the optimal frames efficiently at test time.
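A minimal sketch of this data-library and training step is given below. The particular GLCM properties (contrast, homogeneity, energy, correlation) and the scikit-learn multilayer perceptron standing in for the DBN are assumptions made for illustration, not the paper's exact configuration.

```python
# Sketch of GLCM feature extraction and classifier training for frame prediction.
# The GLCM settings and the MLP stand-in for the DBN are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_frame: np.ndarray) -> np.ndarray:
    """Extract a small GLCM feature vector from an 8-bit grayscale frame."""
    glcm = graycomatrix(gray_frame, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_frame_classifier(gray_frames, labels):
    """Train a classifier that predicts whether a frame is suitable for embedding."""
    X = np.vstack([glcm_features(f) for f in gray_frames])
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(X, labels)
    return clf
```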
Finally, the watermark embedding and extraction processes are carried out, so that the watermark image is embedded within, and later recovered from, the optimally selected frames.
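The abstract does not specify the embedding algorithm; the least-significant-bit scheme below is only a hedged placeholder showing how a watermark bit stream could be inserted into, and read back from, a selected frame.

```python
# Illustrative-only LSB embedding/extraction in a selected frame;
# the paper's actual embedding and extraction algorithms are not given in this abstract.
import numpy as np

def embed_bits(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a flat 0/1 bit array in the least-significant bits of an 8-bit frame."""
    flat = frame.flatten()  # flatten() returns a copy, so the input frame is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits.astype(frame.dtype)
    return flat.reshape(frame.shape)

def extract_bits(watermarked: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden in the frame's least-significant bits."""
    return watermarked.flatten()[:n_bits] & 1
```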