Video anomaly detection is an essential task because of its numerous applications in various areas. Owing to the rarity of abnormal events and the complex nature of videos, video anomaly detection is challenging and has been studied for a long time. In this paper, we propose a semi-supervised approach built on a generative adversarial network with dual discriminators. Compared with previous approaches, our method exploits more motion information in video clips. Specifically, in the training phase, we predict future frames for normal events via a generator and force the predicted frames to be close to their ground truths. In addition, we employ both a frame discriminator and a motion discriminator to adversarially train the generator to produce more realistic and temporally consistent frames. The frame discriminator determines whether the input frames are generated or original frames sampled from normal videos. The motion discriminator determines whether the given optical flows are real or fake: fake optical flows are estimated from generated frames and their adjacent frames, whereas real optical flows are estimated from real frames sampled from the original videos. In the testing phase, we evaluate the quality of the predicted frames to obtain a regularity score and regard frames with lower prediction quality as abnormal. Experimental results on three publicly available datasets demonstrate the effectiveness of the proposed method.
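To make the training objective concrete, the following is a minimal PyTorch-style sketch of one training step with a generator, a frame discriminator, and a motion discriminator. All module names, toy architectures, the placeholder flow estimator, and the loss weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of one training step for the dual-discriminator, future-frame
# prediction GAN described above. Module names, architectures, the flow estimator,
# and the loss weights are hypothetical stand-ins, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Predicts the next frame from a stack of past frames (toy architecture)."""
    def __init__(self, in_frames=4, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames * channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, past):                 # past: (B, T*C, H, W)
        return self.net(past)

class PatchDiscriminator(nn.Module):
    """Toy architecture shared by the frame and motion discriminators."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)                   # patch-wise real/fake logits

def estimate_flow(frame_a, frame_b):
    """Placeholder optical-flow estimator (a pretrained network such as FlowNet
    would be used in practice); returns a 2-channel flow-like tensor."""
    diff = frame_a - frame_b
    return torch.cat([diff.mean(1, keepdim=True)] * 2, dim=1)

def adv_loss(logits, is_real):
    target = torch.ones_like(logits) if is_real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

# One training step on a batch of normal clips: T past frames + 1 future frame.
B, T, C, H, W = 2, 4, 3, 64, 64
clip = torch.rand(B, T + 1, C, H, W)
past, gt = clip[:, :T].reshape(B, T * C, H, W), clip[:, T]

G, D_frame, D_motion = Generator(T, C), PatchDiscriminator(C), PatchDiscriminator(2)

pred = G(past)
flow_fake = estimate_flow(clip[:, T - 1], pred)   # flow involving the generated frame
flow_real = estimate_flow(clip[:, T - 1], gt)     # flow between real frames

# Generator: intensity loss plus adversarial terms from both discriminators.
loss_g = (F.mse_loss(pred, gt)
          + 0.05 * adv_loss(D_frame(pred), True)
          + 0.05 * adv_loss(D_motion(flow_fake), True))

# Frame discriminator: real frames vs. predicted frames.
loss_d_frame = adv_loss(D_frame(gt), True) + adv_loss(D_frame(pred.detach()), False)
# Motion discriminator: flows from real frames vs. flows involving predicted frames.
loss_d_motion = adv_loss(D_motion(flow_real), True) + adv_loss(D_motion(flow_fake.detach()), False)

print(float(loss_g), float(loss_d_frame), float(loss_d_motion))
```

At test time, a regularity score would be derived from the prediction error (e.g., PSNR between the predicted and observed frame), with low-quality predictions flagged as anomalies.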
Saliency detection has become an active topic in both the computer vision and multimedia fields. In this paper, we propose a novel computational model for saliency detection that integrates a holistic center-directional map with a principal local color contrast (PLCC) map. In the proposed framework, perceptual directional patches are first detected based on the discrete wavelet frame transform (DWFT) and a sparsity criterion; the center of the spatial distribution of the extracted directional patches is then used to locate the salient object in an image. Meanwhile, we propose an efficient local color contrast method, PLCC, to compute the color contrast between the salient object and the image background, which is sufficient to highlight and separate salient objects from complex backgrounds while dramatically reducing the computational cost. Finally, by incorporating the complementary visual cues of the global center-directional map and the PLCC map, a final compound saliency map is generated. Extensive experiments on three publicly available image databases verify that the proposed scheme achieves satisfactory results compared with other state-of-the-art saliency-detection algorithms.
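As a rough illustration of the fusion idea, the NumPy sketch below combines a center prior built around the centroid of directional energy with a local color-contrast map. Gradient energy stands in for the DWFT-based directional patches, a patch-to-global color distance stands in for PLCC, and multiplicative fusion is assumed; none of these is the paper's exact formulation.

```python
# Simplified sketch: center-directional prior fused with a local color-contrast map.
# The operators below are illustrative substitutes for the paper's DWFT and PLCC.
import numpy as np

def center_directional_map(gray):
    """Gaussian prior centered at the energy-weighted centroid of gradient magnitude."""
    gy, gx = np.gradient(gray)
    energy = np.hypot(gx, gy)
    ys, xs = np.indices(gray.shape)
    cy = (ys * energy).sum() / (energy.sum() + 1e-8)
    cx = (xs * energy).sum() / (energy.sum() + 1e-8)
    sigma = 0.25 * max(gray.shape)
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def local_color_contrast(rgb, patch=8):
    """Crude local color contrast: distance of each patch's mean color to the global mean."""
    h, w, _ = rgb.shape
    global_mean = rgb.reshape(-1, 3).mean(axis=0)
    out = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = rgb[y:y + patch, x:x + patch].reshape(-1, 3)
            out[y:y + patch, x:x + patch] = np.linalg.norm(block.mean(axis=0) - global_mean)
    return out / (out.max() + 1e-8)

def saliency(rgb):
    gray = rgb.mean(axis=2)
    fused = center_directional_map(gray) * local_color_contrast(rgb)  # multiplicative fusion
    return fused / (fused.max() + 1e-8)

if __name__ == "__main__":
    img = np.random.rand(128, 128, 3)
    print(saliency(img).shape)  # (128, 128) saliency map in [0, 1]
```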