“…A crucial issue inherent in the weighted combination technique is the calculation of the ideal weight for each base classifier [21]. In this technique, the weight is generally set in proportion to the classification accuracies of the base classifiers on training data [22]. The formulation employed to determine the weights in classifier fusion for the purposes of this study is given below…”
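The exact weighting formula is elided in the excerpt above, but the text states the weights are proportional to the base classifiers' training accuracies. A minimal sketch of that scheme, assuming the common normalization w_i = a_i / Σ_j a_j (an assumption, since the cited paper's precise formulation is not quoted):

```python
import numpy as np

def accuracy_weights(train_accuracies):
    """Normalize base-classifier training accuracies into fusion weights
    (assumed accuracy-proportional weighting: w_i = a_i / sum_j a_j)."""
    acc = np.asarray(train_accuracies, dtype=float)
    return acc / acc.sum()

def weighted_vote(scores, weights):
    """Fuse per-classifier decision scores, shaped (n_classifiers, n_samples),
    into one weighted score per sample."""
    return weights @ np.asarray(scores, dtype=float)

# Hypothetical training accuracies of three base classifiers
w = accuracy_weights([0.90, 0.80, 0.70])
# Each row holds one classifier's +1/-1 votes on two test samples
fused = weighted_vote([[+1, -1], [+1, +1], [-1, -1]], w)
labels = np.sign(fused)
```

Here the more accurate classifiers dominate the fused score, so a sample gets the majority label weighted by training accuracy rather than a flat vote.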
“…To demonstrate that the better performance of V-SVM does not come solely from assembling multiple classifiers, two other classifier ensembles were compared with V-SVM. One is the bagging classifier [22], in which a set of SVM classifiers is trained independently, each on a randomly chosen subset (here 80%) of the training images.…”
Section: Effect Of Video-specific Classifier Training
“…Majority voting over all the individual SVM classifiers is used to predict the class of any new image [22]. We call this classifier 'bagging-SVM'.…”
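The bagging-SVM baseline described in these excerpts can be sketched as follows; the 80% subset fraction comes from the text, while the number of estimators, the toy data, and scikit-learn's `LinearSVC` are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_bagging_svm(X, y, n_estimators=5, subset_frac=0.8, seed=0):
    """Train independent linear SVMs, each on a random 80% subset of the
    training data (a sketch of the 'bagging-SVM' baseline)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.choice(len(X), size=int(subset_frac * len(X)), replace=False)
        models.append(LinearSVC().fit(X[idx], y[idx]))
    return models

def predict_majority(models, X):
    """Majority vote over the ensemble; labels are -1/+1, and an odd
    ensemble size guarantees no ties."""
    votes = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
    return np.sign(votes.sum(axis=0))

# Toy two-cluster data standing in for image feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (40, 2)), rng.normal(2, 0.3, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
models = train_bagging_svm(X, y)
pred = predict_majority(models, X)
```

Using an odd number of estimators keeps the ±1 vote sum away from zero, so `np.sign` always yields a definite label.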
Section: Effect Of Video-specific Classifier Training
Abstract. We propose a novel classification framework called the video-specific SVM (V-SVM) for normal-vs-abnormal white-light colonoscopy image classification. V-SVM is an ensemble of linear SVMs, each trained to separate the abnormal images in a particular video from all the normal images in all the videos. Since V-SVM is designed to capture lesion-specific properties as well as intra-class variations, it is expected to perform better than SVM. Experiments on a colonoscopy image dataset with about 10,000 images show that V-SVM significantly improves performance over SVM and other baseline classifiers.
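The abstract specifies the per-video training split (one linear SVM per video, its abnormal images against all normal images pooled across videos) but not the fusion rule. A minimal sketch, assuming max-score fusion over the video-specific SVMs as an illustrative choice, with toy features and `LinearSVC` as further assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_v_svm(normal_X, abnormal_by_video):
    """One linear SVM per video: that video's abnormal images (+1) versus
    the pooled normal images from all videos (-1)."""
    models = []
    for abn in abnormal_by_video:
        X = np.vstack([abn, normal_X])
        y = np.r_[np.ones(len(abn)), -np.ones(len(normal_X))]
        models.append(LinearSVC().fit(X, y))
    return models

def predict_v_svm(models, X):
    """Flag an image abnormal if any video-specific SVM fires
    (max-score fusion -- an assumed rule, not stated in the abstract)."""
    scores = np.stack([m.decision_function(X) for m in models])
    return np.where(scores.max(axis=0) > 0, 1, -1)

# Toy features: normal images near the origin; each "video" contributes
# its own abnormal cluster, mimicking lesion-specific appearance
rng = np.random.default_rng(2)
normal = rng.normal(0, 0.3, (60, 2))
video1 = rng.normal([4, 0], 0.3, (20, 2))
video2 = rng.normal([0, 4], 0.3, (20, 2))
models = train_v_svm(normal, [video1, video2])
pred = predict_v_svm(models, np.vstack([video1, video2, normal]))
```

Because each SVM only has to separate one video's lesions from the normal pool, each binary problem stays simple even when the abnormal class as a whole is multi-modal.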
“…Indeed, benefiting from bootstrapping and aggregation, bagging [2] lowers both the variance and the bias components of the error. Tailored to SVM [12], it has been shown that, notably in the case of multi-class classification, SVM ensembles outperform a single SVM in terms of accuracy [18].…”
How to train effective classifiers on huge amounts of multimedia data is clearly a major challenge that is attracting more and more research across several communities. Less effort, however, is spent on the counterpart scalability issue: how can big trained models be applied efficiently to huge non-annotated media collections? In this paper, we address the problem of speeding up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing. We propose building efficient hash-based classifiers that are applied in a first stage to approximate the exact results and filter the hypothesis space. Experiments performed with millions of one-against-one classifiers show that the proposed hash-based classifier can be more than two orders of magnitude faster than the exact classifier, with minor losses in quality.
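This abstract does not spell out the hash construction, so the following is only a generic sketch of the filter-then-score idea using sign-random-projection LSH (one standard LSH family for dot-product/cosine queries); the dimensions, bit count, and shortlist size are all assumptions:

```python
import numpy as np

def srp_hash(vectors, planes):
    """Sign-random-projection LSH code: one bit per random hyperplane."""
    return (np.atleast_2d(vectors) @ planes.T) > 0

def filter_then_score(x, W, planes, keep=10):
    """Two-stage prediction sketch: rank the classifiers' weight vectors by
    Hamming distance between LSH codes, then compute exact dot products
    only on the shortlist."""
    hamming = (srp_hash(W, planes) != srp_hash(x, planes)).sum(axis=1)
    shortlist = np.argsort(hamming)[:keep]
    return shortlist, W[shortlist] @ x

# Hypothetical setup: 500 linear classifiers in 64-d, 32 hash bits
rng = np.random.default_rng(3)
W = rng.normal(size=(500, 64))
planes = rng.normal(size=(32, 64))
x = W[0].copy()                      # a query aligned with classifier 0
shortlist, scores = filter_then_score(x, W, planes)
```

The Hamming comparison touches only 32 bits per classifier instead of a 64-dimensional dot product, which is where the claimed speed-up comes from; the exact scores are then computed for the few surviving candidates.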