A trend in recent news video retrieval systems is to exploit contextual cues to improve search precision. Compared to using information only at the shot level, the inclusion of story-level contextual cues can intuitively broaden query coverage and facilitate multimodal retrieval. This paper further examines the utility of story-level context in two aspects. First, by applying story-level matching against a parallel news corpus, better retrieval precision is achieved in both text-based query and document expansion. Second, a story-level semantic concept model is better suited to modeling visual similarity, resulting in improved retrieval precision. We quantify the case for story-level processing by its positive impact on the TRECVID-2005 dataset.

Index Terms-video retrieval, query and document expansion, mutual information, semantic concept models