We propose a general framework to incorporate first-order logic (FOL) clauses, thought of as abstract and partial representations of the environment, into kernel machines that learn within a semi-supervised scheme. We rely on a multi-task learning scheme in which each task is associated with a unary predicate defined on the feature space, while higher-level abstract representations consist of FOL clauses built from those predicates. We reuse the mathematical apparatus of kernel machines to solve the problem as the primal optimization of a function composed of the loss on the supervised examples, the regularization term, and a penalty term that derives from enforcing real-valued constraints associated with the predicates. Unlike in classic kernel machines, however, depending on the logic clauses, the overall function to be optimized is no longer convex. An important contribution is to show that, while tackling the optimization by classic numerical schemes is likely to be hopeless, a stage-based learning scheme, in which we first learn from the supervised examples until convergence and then continue by enforcing the logic clauses, is a viable direction to attack the problem. Promising experimental results are given on artificial learning tasks and on the automatic tagging of BibTeX entries, emphasizing the comparison with plain kernel machines.
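As a concrete illustration, here is a minimal sketch, not the paper's exact formulation, of how a clause such as forall x: a(x) -> b(x) can be relaxed into a real-valued penalty on two kernel machines and enforced only in the second training stage. The kernel, the Łukasiewicz-style relaxation, the hyperparameters, and names such as `rbf` and `LAMBDA_LOGIC` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, gamma=1.0):
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||X_i - Y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data: labeled points for predicates a and b, plus unlabeled points on
# which only the clause a(x) -> b(x) is enforced.
X_lab = rng.normal(size=(20, 2))
y_a = (X_lab[:, 0] > 0).astype(float)
y_b = (X_lab.sum(axis=1) > 0).astype(float)
X_unl = rng.normal(size=(80, 2))
X_all = np.vstack([X_lab, X_unl])

K = rbf(X_all, X_all)
alpha = np.zeros((2, len(X_all)))  # kernel-expansion coefficients, one row per task
Y = np.vstack([y_a, y_b])
n_lab = len(X_lab)
LAMBDA_REG, LAMBDA_LOGIC, LR = 1e-3, 1.0, 0.1  # illustrative hyperparameters

def predicates(alpha):
    # f_j(x_i) = sigmoid(sum_k alpha[j, k] K(x_k, x_i)), squashed into [0, 1]
    return 1.0 / (1.0 + np.exp(-(alpha @ K)))

for stage in (1, 2):  # stage 1: fit the labels; stage 2: also enforce the clause
    for _ in range(500):
        F = predicates(alpha)
        G = np.zeros_like(F)                       # dLoss/dF
        G[:, :n_lab] = 2.0 * (F[:, :n_lab] - Y)    # squared loss on labeled points
        if stage == 2:
            # Lukasiewicz-style penalty for a(x) -> b(x): max(0, f_a - f_b)
            active = (F[0] - F[1]) > 0.0
            G[0] += LAMBDA_LOGIC * active
            G[1] -= LAMBDA_LOGIC * active
        # chain rule through the sigmoid, plus the RKHS regularizer alpha K alpha^T
        grad = (G * F * (1.0 - F)) @ K + 2.0 * LAMBDA_REG * (alpha @ K)
        alpha -= LR * grad / len(X_all)
```

The two-stage loop mirrors the scheme described in the abstract: the (convex) supervised fit provides a good starting point before the non-convex logic penalty is switched on.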
Relevance ranking consists in sorting a set of objects with respect to a given criterion. However, in personalized retrieval systems, the relevance criteria may vary among users and may not be predefined. In this case, ranking algorithms that adapt their behavior from users' feedback must be devised. Two main approaches are proposed in the literature for learning to rank: the use of a scoring function, learned from examples, that evaluates a feature-based representation of each object and yields an absolute relevance score; and a pairwise approach, where a preference function is learned to determine which object in a given pair has to be ranked first. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve the ranking performance, an active-learning procedure is devised that aims at selecting the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset, showing promising performance in comparison with other state-of-the-art algorithms.
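To make the architectural idea concrete, the following is a minimal sketch, under assumed dimensions and a weight-sharing scheme that may differ from the published CmpNN, of a comparator that is antisymmetric by construction and is then embedded into a standard sort; training is omitted for brevity.

```python
import numpy as np
from functools import cmp_to_key

rng = np.random.default_rng(1)
DIM, HID = 5, 8  # assumed feature and hidden dimensions
A = rng.normal(scale=0.1, size=(HID, DIM))
B = rng.normal(scale=0.1, size=(HID, DIM))
v = rng.normal(scale=0.1, size=HID)

def prefer(x, y):
    # The two hidden blocks share the weights A and B with the inputs swapped,
    # so prefer(x, y) == -prefer(y, x) holds by construction, mirroring the
    # symmetry that a preference function must satisfy.
    h = np.tanh(A @ x + B @ y) - np.tanh(A @ y + B @ x)
    return float(v @ h)  # > 0 means "rank x before y"

# The learned comparator plugged into a classical sorting algorithm.
objects = [rng.normal(size=DIM) for _ in range(10)]
ranked = sorted(objects, key=cmp_to_key(lambda a, b: -1 if prefer(a, b) > 0 else 1))
```

Because the symmetry is structural rather than learned, consistent pairwise decisions come for free, which is what allows the network to serve directly as the comparator of any comparison-based sorting routine.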