In this paper, we propose a new method for modeling temporal context to boost video annotation accuracy. Our motivation comes from the observation that temporally adjacent shots in a video generally share related content, so annotation performance can be improved by mining the temporal dependencies between shots. Based on this observation, we propose a temporal context model that exploits the redundant information between neighboring shots. By connecting our model to the conditional random field framework and adopting its learning and inference procedures, we obtain a refined probability of a concept occurring in each shot, which combines the temporal context information with the initial output of the video annotation system. Compared with existing methods for mining temporal context in video annotation, our model captures different kinds of shot dependency more accurately and thus yields better annotation performance. Furthermore, our model is relatively simple and efficient, which is important for applications that must process large-scale data. Extensive experiments on the widely used TRECVID datasets demonstrate the effectiveness of our method in improving video annotation accuracy.
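To make the refinement step concrete, the following is a minimal Python/NumPy sketch of chain-structured temporal smoothing of per-shot concept probabilities using forward-backward inference, in the spirit of the CRF-based refinement described above. The function name `refine_shot_probabilities`, the binary present/absent states, and the single `consistency` parameter standing in for learned pairwise potentials are illustrative assumptions, not the paper's actual model.

```python
import numpy as np


def _logsumexp(a, axis):
    """Numerically stable log-sum-exp along the given axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)


def refine_shot_probabilities(initial_probs, consistency=1.0):
    """Refine per-shot concept probabilities with a two-state chain model.

    `initial_probs` are the per-shot detector outputs for one concept;
    `consistency` (an assumed scalar, not a learned potential) controls how
    strongly adjacent shots are encouraged to share the same label.
    """
    eps = 1e-12
    p = np.clip(np.asarray(initial_probs, dtype=float), eps, 1 - eps)
    T = p.shape[0]

    # Unary log-potentials from the initial per-shot output:
    # column 0 = concept absent, column 1 = concept present.
    unary = np.stack([np.log(1.0 - p), np.log(p)], axis=1)   # shape (T, 2)

    # Pairwise log-potential rewarding label agreement between adjacent shots.
    pairwise = consistency * np.eye(2)                        # shape (2, 2)

    # Forward recursion (alpha) in log space.
    alpha = np.empty((T, 2))
    alpha[0] = unary[0]
    for t in range(1, T):
        alpha[t] = unary[t] + _logsumexp(alpha[t - 1][:, None] + pairwise, axis=0)

    # Backward recursion (beta) in log space.
    beta = np.zeros((T, 2))
    for t in range(T - 2, -1, -1):
        beta[t] = _logsumexp(pairwise + unary[t + 1] + beta[t + 1], axis=1)

    # Per-shot posterior marginals; return the "concept present" probability.
    log_marg = alpha + beta
    log_marg -= _logsumexp(log_marg, axis=1)[:, None]
    return np.exp(log_marg[:, 1])


# Example: an isolated low score between confident neighbors is pulled up,
# while a consistent run of low scores stays low.
scores = [0.85, 0.10, 0.78, 0.80, 0.15, 0.12]
print(refine_shot_probabilities(scores, consistency=1.5))
```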