The aggregation of preferences (expressed in the form of rankings) from multiple experts is a well-studied topic in a number of fields. The Kemeny ranking problem aims at computing a consensus ranking with minimal total distance to the input rankings. However, it assumes that these rankings are complete, i.e., every element is explicitly ranked by each expert. This assumption may simply not hold when, for instance, an expert ranks only the top-K items of interest, thus producing a partial ranking. In this paper we formalize the weighted Kemeny ranking problem for partial rankings, an extension of the Kemeny ranking problem that can aggregate partial rankings from multiple experts when only a limited number of relevant elements are explicitly ranked (top-K), and this number may vary from one expert to another (top-Kᵢ). Moreover, we introduce two strategies to quantify the weight of each partial ranking. We cast this problem within the realm of combinatorial optimization and lean on the well-established Ant Colony Optimization (ACO) metaheuristic to arrive at high-quality solutions. The proposed approach is evaluated through a real-world scenario and 190 synthetic datasets from www.PrefLib.org. The experimental evidence indicates that the proposed ACO-based solution is capable of significantly
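The abstract does not spell out the paper's exact objective function or weighting strategies, but the underlying idea can be illustrated with a minimal sketch: given partial rankings, count pairwise disagreements only over the pairs each expert actually ranked, weight each expert's contribution, and search for the complete ranking minimizing the weighted total. The function names, the uniform weights in the usage example, and the brute-force search (in place of the paper's ACO metaheuristic) are all illustrative assumptions, workable only for small element sets.

```python
from itertools import permutations

def disagreements(candidate, partial):
    """Count pairwise disagreements between a complete candidate ranking
    and one partial ranking, over the pairs ranked by both."""
    pos = {e: i for i, e in enumerate(candidate)}
    d = 0
    for i in range(len(partial)):
        for j in range(i + 1, len(partial)):
            a, b = partial[i], partial[j]
            # the partial ranking places a before b; a disagreement
            # occurs if the candidate reverses that order
            if pos[a] > pos[b]:
                d += 1
    return d

def kemeny_aggregate(elements, partial_rankings, weights):
    """Exhaustive search (illustrative stand-in for ACO) for the complete
    ranking minimizing the weighted sum of disagreements."""
    best, best_cost = None, float("inf")
    for cand in permutations(elements):
        cost = sum(w * disagreements(cand, r)
                   for r, w in zip(partial_rankings, weights))
        if cost < best_cost:
            best, best_cost = list(cand), cost
    return best, best_cost

# Three experts, each ranking only two of three elements, equal weights:
ranking, cost = kemeny_aggregate(
    ["A", "B", "C"],
    [["A", "B"], ["B", "C"], ["A", "C"]],
    [1, 1, 1],
)
# ranking == ["A", "B", "C"], cost == 0: the consensus satisfies all pairs
```

The exhaustive search grows factorially with the number of elements, which is precisely why the paper turns to a metaheuristic such as ACO for realistic instance sizes.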
A recent trend in Machine Learning is to augment the transparency of traditional classification models using Granular Computing techniques. This approach has proven particularly useful in the neural networks field, since most successful neural systems require complex structures to behave like universal approximators. However, there is a widely held view that, to build an interpretable classifier, one might have to sacrifice some prediction accuracy. We challenge this belief by exploring the performance of a recently introduced granular classifier, termed Fuzzy-Rough Cognitive Networks, against low-level (i.e., traditional) neural networks. The simulation results show that this neural system can attain quite competitive prediction rates while featuring a shallow, granular architecture. More broadly, this study paves the way for a more thorough evaluation of granular versus low-level neural classifiers in the near future.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.