Recently, two new international image and video coding standards have been released: the wavelet-based JPEG2000 standard, designed primarily for compressing still images, and H.264/AVC, the newest generic standard for video coding. As part of the JPEG2000 suite, Motion-JPEG2000 extends JPEG2000 to a range of applications originally associated with a pure video coding standard like H.264/AVC. However, currently little is known about the relative performance of Motion-JPEG2000 and H.264/AVC in terms of coding efficiency in their overlapping domain of target applications requiring random access to individual pictures. In this paper, we report on a comparative study of the rate-distortion performance of Motion-JPEG2000 and H.264/AVC using a representative set of video material. Our experimental coding results indicate that H.264/AVC performs surprisingly well on individually coded pictures in comparison to the highly sophisticated still image compression technology of JPEG2000. In addition to the rate-distortion analysis, we also provide a brief comparison of the evaluated coding algorithms in terms of complexity and functionality.
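The abstract leaves the distortion measure unstated; in codec comparisons of this kind, the rate-distortion curve typically plots bitrate against the luma PSNR of each decoded picture. A minimal sketch of that metric, assuming 8-bit frames held in NumPy arrays (the frame sizes and noise below are illustrative only):

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two frames of identical shape."""
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # lossless reconstruction
    return 10.0 * np.log10(peak * peak / mse)

# Example: compare a luma plane against a slightly distorted version of itself.
original = np.random.randint(0, 256, (288, 352), dtype=np.uint8)  # CIF luma plane
noise = np.random.randint(-2, 3, original.shape)
reconstructed = np.clip(original.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(original, reconstructed):.2f} dB")
```

Averaging such per-picture PSNR values over a sequence, at several encoder rate points, yields one rate-distortion curve per codec.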
This work summarizes the findings of the 7th iteration of the Video Browser Showdown (VBS) competition, organized as a workshop at the 24th International Conference on Multimedia Modeling in Bangkok. The competition focuses on video retrieval scenarios in which the sought scene has either been observed previously or is only described by another person (i.e., an example shot is not available). During the event, nine teams competed with their video retrieval tools in providing access to a shared video collection with 600 hours of video content. The evaluation objectives, rules, scoring, tasks, and all participating tools are described in the article. In addition, we provide some insights into how the different teams interacted with their video browsers, made possible by a novel interaction logging mechanism introduced for this iteration of the VBS. The results collected at the VBS evaluation server confirm that searching for one particular scene in the collection within a limited time is still a challenging task for many of the approaches showcased during the event. Given only a short textual description, finding the correct scene is even harder. In ad hoc search with multiple relevant scenes, the tools were mostly able to find at least one scene, whereas recall was an issue for many teams. The logs also reveal that even though recent exciting advances in machine learning narrow the classical semantic gap, user-centric interfaces are still required to mediate access to specific content. Finally, open challenges and lessons learned are presented for future VBS events.
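The abstract does not detail the logging mechanism itself; as a purely hypothetical sketch, a minimal version could append one timestamped JSON event per user action. All field names and values below are illustrative assumptions, not the actual VBS logging schema:

```python
import json
import time

def log_event(log_file, team: str, user: int, category: str, action: str, value: str = "") -> None:
    """Append one timestamped interaction event as a JSON line.

    The schema here is hypothetical; the real VBS log format is defined
    by the competition organizers.
    """
    event = {
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "team": team,
        "user": user,
        "category": category,  # e.g. "text", "browsing", "sketch"
        "action": action,      # e.g. "query", "scroll", "submit"
        "value": value,
    }
    log_file.write(json.dumps(event) + "\n")

# Usage: record a text query followed by a result submission.
with open("interactions.log", "a") as f:
    log_event(f, "TeamA", 1, "text", "query", "person riding a horse")
    log_event(f, "TeamA", 1, "result", "submit", "video 042, shot 17")
```

An append-only, line-delimited format like this keeps logging cheap during the timed competition and lets organizers aggregate events across teams afterwards.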
Interactive video retrieval tools developed over the past few years are emerging as powerful alternatives to automatic retrieval approaches, giving the user more control as well as more responsibility. Current research tries to identify the best combinations of image, audio, and text features that, combined with innovative UI design, maximize the tools' performance. We present the most recent installment, the Video Browser Showdown (VBS) 2015, which was held in conjunction with the International Conference on MultiMedia Modeling 2015 (MMM 2015) and has the stated aim of pushing for a better integration of the user into the search process. The setup of the competition, including the dataset used and the tasks presented, as well as the participating tools, will be introduced. The performance of those tools will be thoroughly presented and analyzed, interesting highlights will be marked, and some predictions will be made regarding the field's research focus in the near future.