Learning to Rank (L2R) is one of the main research lines in Information Retrieval. Risk-sensitive L2R is a sub-area of L2R that aims to learn models that are good on average while at the same time reducing the risk of performing poorly on a few but important queries (e.g., medical or legal queries). One way of reducing risk in learned models is to select and remove features that are noisy or redundant, or that promote some queries to the detriment of others. This risk is exacerbated by learning methods that usually maximize an average metric, such as mean average precision (MAP) or Normalized Discounted Cumulative Gain (NDCG).
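To make the notion of risk concrete, the sketch below computes URisk, a standard risk-sensitive measure from the IR evaluation literature (not necessarily the criterion used in this work): per-query losses against a baseline are weighted (1 + α) times more heavily than gains, so a model that wins on average but loses badly on a few queries is penalized. The metric choice (NDCG@10) and the α value are illustrative.

```python
def urisk(run_scores, baseline_scores, alpha=1.0):
    """URisk: mean per-query difference against a baseline, with
    losses amplified by (1 + alpha).  Both arguments hold per-query
    effectiveness values (e.g., NDCG@10) for the same query set."""
    assert len(run_scores) == len(baseline_scores)
    total = 0.0
    for run_q, base_q in zip(run_scores, baseline_scores):
        delta = run_q - base_q
        # A loss against the baseline counts (1 + alpha) times as much.
        total += delta if delta >= 0 else (1 + alpha) * delta
    return total / len(run_scores)
```

With α = 0 this reduces to the plain average gain over the baseline; larger α expresses stronger aversion to per-query losses.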
However, historically, feature selection (FS) methods have focused only on effectiveness and feature reduction as their main objectives. Accordingly, in this work we propose to evaluate FS for L2R with an additional objective in mind, namely risk-sensitiveness. We present novel single- and multi-objective criteria to optimize feature reduction, effectiveness, and risk-sensitiveness all at the same time. We also introduce a new methodology to explore the search space, suggesting effective and efficient extensions of a well-known evolutionary algorithm (SPEA2) for FS applied to L2R. Our experiments show that explicitly including risk as an objective criterion is crucial to achieving more effective and risk-sensitive performance. We also provide a thorough analysis of our methodology and experimental results.
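The sketch below shows how a candidate feature subset might be scored on the three objectives the abstract names, in a form an SPEA2-style evolutionary loop could consume. It is a minimal illustration under assumptions: train_ranker and ndcg_per_query are hypothetical helpers, and the paper's actual criteria may differ.

```python
import numpy as np

def fitness(mask, X_train, y_train, X_val, y_val, baseline_ndcg, alpha=1.0):
    """Score a binary feature mask on three objectives to be maximized:
    feature reduction, effectiveness, and risk-sensitiveness.
    `train_ranker` and `ndcg_per_query` are hypothetical placeholders
    for an L2R training routine and a per-query evaluator."""
    selected = np.flatnonzero(mask)
    model = train_ranker(X_train[:, selected], y_train)            # hypothetical
    per_query = ndcg_per_query(model, X_val[:, selected], y_val)   # hypothetical
    reduction = 1.0 - len(selected) / mask.size  # fraction of features removed
    effectiveness = float(np.mean(per_query))    # average NDCG over queries
    risk = urisk(per_query, baseline_ndcg, alpha)  # URisk from the sketch above
    return reduction, effectiveness, risk
```

An SPEA2-style search would then maintain an archive of masks that are Pareto-non-dominated under these three objectives; DEAP's tools.selSPEA2 is one off-the-shelf implementation of that environmental selection step.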
The main focus of an intelligent environment, as with other applications of Artificial Intelligence, is generally on the provision of good decisions towards the management of the environment or the support of human decision-making processes. The quality of the system is often measured in terms of accuracy or other performance metrics calculated on labeled data. Other equally important aspects are usually disregarded, such as the ability to produce an intelligible explanation for the user of the environment. That is, aside from proposing an action, prediction, or decision, the system should also provide an explanation that allows the user to understand the rationale behind the output. This is becoming increasingly important at a time when algorithms play a growing role in our lives and start to make decisions that significantly affect them, so much so that the EU has recently legislated on a "right to explanation". In this paper we propose a human-centric intelligent environment that takes into consideration the domain of the problem and the mental model of the human expert in order to provide intelligible explanations that can improve the efficiency and quality of the decision-making processes.
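As a minimal illustration of "output plus rationale" (not the system proposed in the paper), the sketch below returns a decision together with a plain-language explanation built from the per-feature contributions of a linear model; the model, feature names, and wording are all assumed for the example.

```python
import numpy as np

def decide_with_explanation(weights, bias, feature_names, x, top_k=2):
    """Return a binary decision plus an intelligible rationale based on
    the per-feature contributions (w_i * x_i) of a linear scorer.
    All names and values here are illustrative."""
    contributions = weights * x
    decision = contributions.sum() + bias > 0
    # Surface the features that pushed the score hardest either way.
    order = np.argsort(-np.abs(contributions))[:top_k]
    reasons = [f"{feature_names[i]} contributed {contributions[i]:+.2f}"
               for i in order]
    return decision, "Main factors: " + "; ".join(reasons)

# Toy usage: an environment deciding whether to raise an alert.
w = np.array([0.8, -0.5, 0.1])
names = ["temperature", "humidity", "noise_level"]
print(decide_with_explanation(w, bias=-0.2, feature_names=names,
                              x=np.array([1.2, 0.4, 0.3])))
```

In a real intelligent environment the explanation would additionally be phrased in terms of the expert's own mental model and domain vocabulary, as the abstract argues.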