This paper presents methods for video browsing and for extracting information from video by measuring brightness data. Cut breaks are detected by measuring the areas in which inter-frame differences occur. We propose a video browsing tool (VBTool) that creates a compressed video containing all important scenes. We use two factors related to pixel brightness to allow automatic generation of the compressed video.

1. INTRODUCTION

There have been many attempts at using video in the computer and communication fields [2]. However, there has been little improvement in the handling of video itself. Video cameras are ubiquitous, but the information recorded on tape is useless until someone looks at it and indicates the tape's contents. Video information is difficult to handle because no machine can automatically "recognize" scenes with any degree of accuracy. Moreover, there are far more people with cameras producing video than people capable of editing that video. We therefore think it is necessary to convert video into a form that is easier to handle.

The first stage of our study was an attempt to supply additional information for easy handling. This paper presents a method for extracting information relating to video content; this information can then be used for video browsing. In section 2, we discuss the extraction method, focusing on cut detection. In section 3, we propose a useful video browsing method as an example of how to utilize the information. The results of experiments are presented in section 4.

2. EXTRACTION OF VIDEO CONTENT

Extracting information from video

It is useful to obtain information about video content when viewing, searching, or editing videos. For example, knowing the location of cut breaks and converting the video into smaller units makes the video easier to handle. For these reasons, we have to find characteristic values for cut breaks. Besides cut location, there is much other information useful for handling. If the amount of motion and the camera operation in a video are known, for instance, its contents can be recognized in a short time. The location of a slow-motion scene in an action movie often indicates an important or climactic scene.

To obtain this information, we use brightness data, because the brightness component is an essential value of an image and it reflects the video content well. We introduce two brightness-based values to extract different aspects of video content: the frame-base histogram difference (FHD) and the pixel-base inter-frame difference histogram (IDH). (Fig. 1)

FH

The brightness distribution is related to the image, and if the image changes, the brightness distribution changes. We refer to the brightness histogram within a frame as FH (frame-base histogram). We can obtain information about moving images from the FH pattern. For example, at the beginning of a cut, FH changes discontinuously at that frame (see Fig. 2(a)).
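The FH-based cut detection idea described above can be sketched as follows: compute a brightness histogram per frame and flag a cut when the histogram difference between consecutive frames jumps. This is only a minimal illustration; the bin count, the L1 distance, the threshold, and the function names are assumptions, not the paper's exact parameters.

```python
# Sketch of frame-base histograms (FH) and their difference (FHD) for cut detection.
# All parameters below are illustrative assumptions.
import numpy as np

def frame_histogram(gray_frame, bins=64):
    """Brightness histogram (FH) of one grayscale frame, normalized to sum to 1."""
    hist, _ = np.histogram(gray_frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def histogram_difference(fh_prev, fh_curr):
    """Frame-base histogram difference (FHD), here an L1 distance between two FHs."""
    return np.abs(fh_curr - fh_prev).sum()

def detect_cuts(gray_frames, threshold=0.5, bins=64):
    """Return indices of frames whose FHD against the previous frame exceeds the threshold."""
    cuts, prev_fh = [], None
    for i, frame in enumerate(gray_frames):
        fh = frame_histogram(frame, bins)
        if prev_fh is not None and histogram_difference(prev_fh, fh) > threshold:
            cuts.append(i)
        prev_fh = fh
    return cuts
```

A real pipeline would tune the threshold (or use the IDH cue as well) to separate genuine cut breaks from large object or camera motion.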
SUMMARY

We focus on the feature transform approach as one methodology for biometric template protection, where the template consists of the features extracted from the biometric trait. This study considers, in particular, some properties of unitary (including orthogonal) transform-based template protection. It is known that the Euclidean distance between templates protected by a unitary transform is the same as that between the original (non-protected) ones. In this study, we further show that such a transform yields the same results in l2-norm minimization problems as the original templates. This means that there is no degradation of recognition performance in authentication systems based on l2-norm minimization. Therefore, the protected templates can be reissued multiple times without access to the original templates. In addition, a DFT-based template protection scheme is proposed as a unitary transform-based scheme. The proposed scheme can generate protected templates efficiently with the FFT, in addition to possessing the useful properties above. It is also applied to face recognition experiments to evaluate its effectiveness.
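The distance-preservation property underlying this summary can be checked directly: a unitary transform (here the normalized DFT computed with the FFT) leaves Euclidean distances between templates, and hence l2-based matching scores, unchanged. The key-dependent random phase below is an illustrative way to make the transform user-specific; it is not necessarily the paper's exact construction.

```python
# Minimal check that a key-dependent unitary transform preserves Euclidean distance.
# The construction (random phase + unitary DFT) is an illustrative assumption.
import numpy as np

def protect(template, key_phase):
    """Apply a key-dependent unitary transform: elementwise phase rotation, then unitary DFT."""
    rotated = template * np.exp(1j * key_phase)   # diagonal unitary (|e^{j*phi}| = 1)
    return np.fft.fft(rotated, norm="ortho")      # norm="ortho" makes the DFT unitary

rng = np.random.default_rng(0)
x = rng.standard_normal(128)           # enrolled template (feature vector)
y = rng.standard_normal(128)           # query template
key = rng.uniform(0, 2 * np.pi, 128)   # per-user key

d_plain = np.linalg.norm(x - y)
d_protected = np.linalg.norm(protect(x, key) - protect(y, key))
print(d_plain, d_protected)            # equal up to floating-point error
```

Because distances are preserved for any such key, a compromised template can be revoked and reissued with a new key while l2-based recognition behaves exactly as it would on the unprotected features.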
Abstract. This paper discusses a video cut detection method. Cut detection is an important technique for making videos easier to handle. First, this paper analyzes the distribution of the image difference V to clarify the characteristics that make V suitable for cut detection. We then propose a cut detection method that uses a projection-detecting filter, where a projection is an isolated sharp peak in V. A motion-sensitive V is used to stabilize the V projections at cuts, so cuts are detected more reliably with this filter. The method achieves high detection rates without increasing the rate of misdetection. Experimental results confirm the effectiveness of the filter.
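The projection-detecting idea can be illustrated with a simple local filter over the difference sequence V: a frame is reported as a cut when its difference value stands out sharply from the values in a surrounding window, rather than merely exceeding a global threshold. The window size and peak ratio below are illustrative assumptions, not the paper's settings.

```python
# Sketch of a projection (isolated sharp peak) detecting filter over the
# image-difference sequence V. Parameters are illustrative assumptions.
import numpy as np

def detect_projections(v, window=5, ratio=3.0):
    """Return indices i where v[i] is an isolated sharp peak relative to its neighborhood."""
    v = np.asarray(v, dtype=float)
    cuts = []
    for i in range(len(v)):
        lo, hi = max(0, i - window), min(len(v), i + window + 1)
        neighbors = np.concatenate([v[lo:i], v[i + 1:hi]])  # window around i, excluding i
        if neighbors.size and v[i] > ratio * max(neighbors.max(), 1e-9):
            cuts.append(i)
    return cuts
```

Comparing each value against its local neighborhood is what lets a filter of this kind reject gradual motion-induced increases in V while still responding to the isolated spikes produced by cuts.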