Video summarization aims to produce a compact representation of an input video by retaining its interesting parts and discarding the rest. The abstracts thus generated enhance browsing and retrieval of video data. The quality of the summaries produced by video summarization algorithms can be improved if redundant frames in the input video are eliminated before summarization. This paper presents a novel domain-independent method for eliminating redundancy from input videos prior to summarization while preserving the keyframes of the original video. The frames of the input video are first presampled by selecting two frames per second. The flow vectors between consecutive frames are then computed using the SIFT Flow algorithm. The magnitudes of the flow vectors at each pixel position are summed to obtain the displacement magnitude between consecutive frames. Redundant frames are filtered out based on local averaging of these displacement values. The method is evaluated on two standard datasets, VSUMM and OVP. The results demonstrate that an average reduction rate of 97.64% is achieved consistently across videos of all categories. The method also gives superior results compared with other state-of-the-art redundancy elimination methods for video summarization.
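The following is a minimal sketch of the pipeline outlined above, not the authors' implementation. It assumes OpenCV's Farneback dense optical flow as a stand-in for SIFT Flow (which has no standard Python distribution), and the 2-frames-per-second presampling rate and symmetric local-averaging window are illustrative assumptions.

```python
# Hedged sketch of the redundancy-elimination pipeline described in the abstract.
# Farneback dense optical flow substitutes for SIFT Flow; window size and
# presampling rate are assumptions, not the paper's exact settings.
import cv2
import numpy as np

def presample(video_path, frames_per_second=2):
    """Read the video and keep roughly `frames_per_second` frames per second."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(round(fps / frames_per_second)), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        idx += 1
    cap.release()
    return frames

def displacement_magnitudes(frames):
    """Sum per-pixel flow magnitudes between consecutive presampled frames."""
    mags = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).sum())  # total displacement
    return np.array(mags)

def filter_redundant(frames, mags, window=5):
    """Keep a frame only if its displacement exceeds the local average."""
    kept = [frames[0]]
    for i, m in enumerate(mags):
        lo, hi = max(0, i - window), min(len(mags), i + window + 1)
        if m >= mags[lo:hi].mean():
            kept.append(frames[i + 1])
    return kept

if __name__ == "__main__":
    frames = presample("input.mp4")          # hypothetical input file
    mags = displacement_magnitudes(frames)
    retained = filter_redundant(frames, mags)
    print(f"kept {len(retained)} of {len(frames)} presampled frames")
```

The local-averaging step keeps only frames whose inter-frame displacement stands out against their temporal neighborhood, which is one plausible reading of the filtering rule described above.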