This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real-life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors on a number of large protein classification tasks. Our results indicate that a "decomposed" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems.
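To make the evaluation metric concrete, here is a minimal sketch of computing the average log-loss (in bits per symbol) of a sequential predictor. The order-k Markov predictor with add-alpha smoothing is only an illustrative stand-in for the VOMM algorithms (CTW, PPM, PST) compared in the paper; its names and parameters are assumptions of this sketch, not taken from the paper.

```python
import math
from collections import defaultdict

def average_log_loss(predictor, sequence, alphabet):
    """Average log-loss, in bits per symbol, of a sequential predictor.

    predictor(history, symbol, alphabet) must return P(symbol | history) > 0.
    """
    total = 0.0
    for i, sym in enumerate(sequence):
        total -= math.log2(predictor(sequence[:i], sym, alphabet))
    return total / len(sequence)

def order_k_markov(k=2, alpha=1.0):
    """Toy online order-k Markov predictor with add-alpha smoothing.

    Only an illustrative stand-in for CTW/PPM/PST: it predicts from the
    last k symbols and updates its counts after each prediction.
    """
    counts = defaultdict(lambda: defaultdict(float))

    def predict(history, symbol, alphabet):
        ctx = tuple(history[-k:])
        c = counts[ctx]
        p = (c[symbol] + alpha) / (sum(c.values()) + alpha * len(alphabet))
        c[symbol] += 1.0  # update counts only after the prediction is made
        return p

    return predict

if __name__ == "__main__":
    seq = "abracadabraabracadabra"
    print(round(average_log_loss(order_k_markov(k=2), seq, sorted(set(seq))), 3))
```

Lower average log-loss means better prediction; it equals the average number of bits per symbol a corresponding arithmetic coder would need, which is why lossless compressors double as predictors in this setting.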
Clustering is an unsupervised learning setting in which the goal is to partition a collection of data points into disjoint clusters. Often a bound k on the number of clusters is given or assumed by the practitioner. Many versions of this problem have been defined, most notably k-means and k-median.

An underlying problem stemming from the unsupervised nature of clustering is that of determining a similarity function. One approach for alleviating this difficulty is known as clustering with side information or, alternatively, semi-supervised clustering. Here, the practitioner incorporates side information in the form of "must be clustered" or "must be separated" labels for pairs of data points. Each such piece of information comes at a "query cost" (often involving human response solicitation). The collection of labels is then incorporated into the usual clustering algorithm as either strict or soft constraints, possibly adding a pairwise constraint penalty function to the chosen clustering objective.

Our work is most closely related to clustering with side information. We ask how to choose the pairs of data points to query. Our analysis gives rise to a method provably better than simply choosing them uniformly at random. Roughly speaking, we show that the query distribution must be biased so that more weight is placed on pairs incident to elements in smaller clusters in some optimal solution. Of course we do not know the optimal solution, and hence we do not know the bias. Using the recently introduced method of ε-smooth relative regret approximations of Ailon, Begleiter, and Ezra, we show an iterative process that improves both the clustering and the bias in tandem. The process provably converges to the optimal solution faster (in terms of query cost) than an algorithm selecting pairs uniformly.
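A minimal sketch of the kind of biased pair querying described above, assuming the current clustering is given as a cluster assignment. The specific weighting used here (each endpoint drawn with probability inversely proportional to the size of its current cluster) is an illustrative choice of this sketch, not the exact distribution derived in the paper, where the bias is defined with respect to an optimal solution and refined iteratively.

```python
import random
from collections import Counter

def biased_pair_query(assignment, rng=random):
    """Sample one pair of points for a must-link / cannot-link query.

    assignment: list mapping each data point index to its current cluster id.
    Each endpoint is drawn with weight 1 / (size of its current cluster),
    so pairs touching small clusters are queried more often. Illustrative
    weighting only; not the distribution analyzed in the paper.
    """
    sizes = Counter(assignment)
    weights = [1.0 / sizes[c] for c in assignment]
    idx = range(len(assignment))
    i = rng.choices(idx, weights=weights, k=1)[0]
    j = rng.choices(idx, weights=weights, k=1)[0]
    while j == i:  # resample until the two endpoints are distinct
        j = rng.choices(idx, weights=weights, k=1)[0]
    return i, j

if __name__ == "__main__":
    # Toy "current clustering": one big cluster (0) and one small cluster (1).
    assignment = [0] * 8 + [1] * 2
    print([biased_pair_query(assignment) for _ in range(5)])
```

In the iterative process, the labels gathered from such queries would be used to update the clustering, and the updated clustering in turn yields a better bias for the next round of queries.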
The disagreement coefficient of Hanneke has become a central data-independent invariant in proving active learning rates. It has been shown in various ways that a concept class with low complexity, together with a bound on the disagreement coefficient at an optimal solution, allows active learning rates that are superior to passive learning ones.

We present a different tool for pool-based active learning which follows from the existence of a certain uniform version of a low disagreement coefficient, but is not equivalent to it. In fact, we present two fundamental active learning problems of significant interest for which our approach yields nontrivial active learning bounds. However, any general-purpose method relying only on disagreement coefficient bounds fails to guarantee any useful bounds for these problems. The applications of interest are learning to rank from pairwise preferences, and clustering with side information (a.k.a. semi-supervised clustering).

The tool we use is based on the learner's ability to compute an estimator of the difference between the loss of any hypothesis and that of some fixed "pivotal" hypothesis, to within an absolute error of at most ε times the disagreement measure (ℓ1 distance) between the two hypotheses. We prove that such an estimator implies the existence of a learning algorithm which, at each iteration, reduces its in-class excess risk by a constant factor. Each iteration replaces the current pivotal hypothesis with the minimizer of the estimated loss-difference function with respect to the previous pivotal hypothesis. The label complexity essentially becomes that of computing this estimator.
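A minimal sketch of the iteration described above, assuming a hypothetical estimate_loss_diff(h, pivot) oracle that estimates loss(h) − loss(pivot) to the stated accuracy. How such an estimator is built from queried labels, which is where the label complexity lives, is not modeled here.

```python
def iterate_pivot(hypotheses, estimate_loss_diff, n_rounds=10):
    """Iterative pivot-replacement scheme (hypothetical interface).

    estimate_loss_diff(h, pivot) is assumed to estimate loss(h) - loss(pivot)
    to within eps times the disagreement (ell_1) distance between h and pivot.
    Constructing this estimator from labels is the label-complexity
    bottleneck and is not shown here.
    """
    pivot = hypotheses[0]  # arbitrary initial pivotal hypothesis
    for _ in range(n_rounds):
        # Replace the pivot by the minimizer of the estimated loss-difference
        # function taken relative to the current pivot.
        pivot = min(hypotheses, key=lambda h: estimate_loss_diff(h, pivot))
    return pivot

if __name__ == "__main__":
    # Toy demo: numeric "hypotheses" and a noiseless loss-difference oracle.
    hs = [0.0, 0.3, 0.7, 1.0]
    true_loss = lambda h: (h - 0.7) ** 2
    print(iterate_pivot(hs, lambda h, pivot: true_loss(h) - true_loss(pivot)))
```

Each round shrinks the in-class excess risk of the pivot by a constant factor, so the number of rounds, and hence the number of estimator evaluations, grows only logarithmically in the desired accuracy.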