This paper presents a video retrieval method based on the DC coefficient of the Discrete Cosine Transform (DCT) and multivariate parametric statistical tests, namely tests for the equality of mean vectors and of covariance matrices. Each key-frame is separated into background scenes and foreground objects, and salient features, such as colour and Gabor texture, are extracted from the background and foreground components and assembled into a feature vector. This vector is compared against the vectors in the feature-vector database using the statistical tests. The feature vectors are first compared with respect to covariance: only if the feature vector of the key-frame and a feature vector from the database pass this test is the test for equality of mean vectors performed; otherwise the comparison stops. If the feature vectors pass both tests, the query key-frame is inferred to represent the target video in the video database; otherwise it is concluded that the query key-frame does not represent that video, and the proposed system takes the next feature vector for matching. The proposed method achieves an average retrieval rate of 97.232%, 96.540%, and 96.641% for CC_WEB, UCF101, and our newly constructed database, respectively. Further, the mAP scores computed for each video dataset are 0.807, 0.812, and 0.814 for CC_WEB, UCF101, and our newly constructed database, respectively. The results obtained by the proposed method are comparable to those of existing methods.
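The two-stage screening described above can be sketched with standard multivariate tests. This is a minimal illustration, not the paper's implementation: it assumes Box's M test (chi-square approximation) for the covariance stage and the two-sample Hotelling's T² test for the mean stage, with the mean test run only when the covariance test passes.

```python
import numpy as np
from scipy import stats

def box_m_test(X1, X2):
    """Box's M test for equality of two covariance matrices.
    Returns (statistic, p_value) via the chi-square approximation."""
    n1, p = X1.shape
    n2, _ = X2.shape
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)  # pooled covariance
    M = ((n1 + n2 - 2) * np.log(np.linalg.det(Sp))
         - (n1 - 1) * np.log(np.linalg.det(S1))
         - (n2 - 1) * np.log(np.linalg.det(S2)))
    # small-sample correction factor for the chi-square approximation
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1))
         * (1 / (n1 - 1) + 1 / (n2 - 1) - 1 / (n1 + n2 - 2)))
    chi2 = M * (1 - c)
    df = p * (p + 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

def hotelling_t2_test(X1, X2):
    """Two-sample Hotelling's T^2 test for equality of mean vectors.
    Returns (F statistic, p_value)."""
    n1, p = X1.shape
    n2, _ = X2.shape
    d = X1.mean(axis=0) - X2.mean(axis=0)
    Sp = ((n1 - 1) * np.cov(X1, rowvar=False)
          + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(Sp, d)
    f = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    return f, stats.f.sf(f, p, n1 + n2 - p - 1)

def frames_match(X_query, X_db, alpha=0.05):
    """Two-stage screening: test covariances first; run the mean test
    only if the covariance test passes (as in the described pipeline)."""
    _, p_cov = box_m_test(X_query, X_db)
    if p_cov < alpha:          # covariances differ -> stop, no match
        return False
    _, p_mean = hotelling_t2_test(X_query, X_db)
    return bool(p_mean >= alpha)
```

Here `X_query` and `X_db` are assumed to be sample matrices (rows = observations of the feature vector); the early exit after a failed covariance test mirrors the "otherwise, the testing process is stopped" step.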
With the growth of multimedia data types and available bandwidth, there is huge demand for video retrieval systems, as users move from text-based retrieval systems to content-based retrieval systems. The selection of extracted features plays an important role in content-based video retrieval, regardless of the video attributes under consideration. This work helps upcoming researchers in the field of video retrieval to get an idea of the different techniques and methods available for video retrieval. These features are proposed for selection, indexing, and ranking according to their potential interest to the user. Good feature selection also allows the time and space costs of the retrieval process to be reduced. This survey reviews the interesting features that can be extracted from video data for indexing and retrieval, along with similarity measurement methods. We also identify current research issues in the area of content-based video retrieval systems.
In modern computer vision, detecting objects in videos is a critical task. Quickly and reliably recognising and distinguishing the multiple aspects of a video is a crucial capability for interacting with one's environment. The core problem is that, in principle, to ensure that no significant aspect is missed, every region of video content must be scanned for features at many different scales. Actually classifying the content of a given region, however, takes time and effort, and both the time and the computational budget an agent can spend on classification are limited. The proposed method applies two approximation procedures to speed up the standard detector and demonstrates their effectiveness in terms of both detection accuracy and speed. The first enhancement of our group-based classifier speeds up the classification of sub-features by framing the problem as a sequential feature-selection procedure. The second improvement provides better multiscale features for detecting objects of all sizes without rescaling the input image from a video. A video is a sequence of successive images with a constant time interval, so it can provide more information about its content as scenes change over time; manually handling content and features is therefore impractical. In the proposed work, a Group-based Video Content Extraction Classifier (GbCCE) extracts content from a video by extracting relevant features using a group-based classifier. The proposed method is compared with conventional approaches, and the findings indicate that it delivers better performance.
For content-based video retrieval, this paper proposes a multifeature extraction approach based on colour string extraction, local texture characteristics, and the Sum of Absolute Differences (SAD), which together capture colour, texture, and motion features. The colour string feature is derived using the string length count, and the colour histogram approach is used to detect scene changes. Key-frames are extracted based on the detected scene changes, and the colour and texture features are then derived from each key-frame using the colour string and local texture functions. The SAD is computed and used to derive the motion feature of a video. Jeffrey's divergence measure is applied to compare the feature vectors of the query and the key-frames of the videos to be retrieved. The proposed approach captures both spatial and temporal aspects well, and retrieval performance is improved by exploiting the multiple features. The proposed system outperforms existing approaches.
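The key-frame comparison step above relies on Jeffrey's divergence between feature histograms. As a minimal sketch (not the paper's implementation), the form below is the numerically stable symmetrized variant popular in image retrieval, d(H, K) = Σᵢ [hᵢ log(hᵢ/mᵢ) + kᵢ log(kᵢ/mᵢ)] with mᵢ = (hᵢ + kᵢ)/2; the `best_match` helper is a hypothetical name for the nearest-neighbour lookup.

```python
import numpy as np

def jeffrey_divergence(h, k, eps=1e-10):
    """Jeffrey's divergence between two histograms.
    Histograms are smoothed by eps and renormalized to avoid log(0)."""
    h = np.asarray(h, dtype=float) + eps
    k = np.asarray(k, dtype=float) + eps
    h /= h.sum()
    k /= k.sum()
    m = (h + k) / 2.0  # per-bin average distribution
    return float(np.sum(h * np.log(h / m) + k * np.log(k / m)))

def best_match(query_vec, database):
    """Return the index of the database feature vector with the
    smallest Jeffrey's divergence from the query vector."""
    dists = [jeffrey_divergence(query_vec, d) for d in database]
    return int(np.argmin(dists))
```

Identical histograms yield a divergence of (approximately) zero, and the measure is symmetric in its arguments, which is why it suits query-to-database matching better than the plain Kullback-Leibler divergence.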