FairCORELS is an open-source Python module for building fair rule lists. It is a multi-objective variant of CORELS, a branch-and-bound algorithm that learns certifiably optimal rule lists. FairCORELS supports six statistical fairness metrics, offers several exploration parameters, and leverages the fairness constraints to prune the search space efficiently. It can easily generate sets of accuracy-fairness trade-offs. The models learnt are interpretable by design, and a sparsity parameter can be used to control their length.
CCS Concepts: • Software and its engineering → Software libraries and repositories; • Computing methodologies → Rule learning; Supervised learning; Machine learning algorithms.
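As a rough illustration, the sketch below shows how such a rule-list learner might be driven from Python. Every name here (the FairCorelsClassifier class and its c, fairness, epsilon, maj_pos, and min_pos parameters) is an assumption about a scikit-learn-style interface, not a verified copy of the FairCORELS API; the package documentation is the authoritative reference.

```python
# Illustrative sketch only: the class and parameter names below are assumptions
# about a scikit-learn-style interface, not the verified FairCORELS API.
import numpy as np
from faircorels import FairCorelsClassifier  # assumed import path

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 6))   # binarized features (rule antecedents)
y = rng.integers(0, 2, size=200)        # binary labels

clf = FairCorelsClassifier(
    c=0.001,       # sparsity regularization: penalizes each additional rule (assumed)
    fairness=1,    # id of one of the six statistical fairness metrics (assumed encoding)
    epsilon=0.95,  # required fairness level (assumed: 1.0 means perfectly fair)
    maj_pos=1,     # column marking the majority protected group (assumed)
    min_pos=2,     # column marking the minority protected group (assumed)
)
clf.fit(X, y)              # branch-and-bound search for an optimal fair rule list
print(clf.predict(X[:5]))  # predictions from the learnt, human-readable rule list
```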
A hybrid model involves the cooperation of an interpretable model and a complex black box. At inference, any input of the hybrid model is assigned to either its interpretable or its complex component by a gating mechanism. The advantages of such models over classical ones are twofold: 1) they grant users precise control over the level of transparency of the system, and 2) they can potentially perform better than a standalone black box, since redirecting some of the inputs to an interpretable model implicitly acts as regularization. Still, despite their high potential, hybrid models remain under-studied in the interpretability/explainability literature. In this paper, we remedy this by presenting a thorough investigation of such models from three perspectives: Theory, Taxonomy, and Methods. First, we explore the theory behind the generalization of hybrid models from the Probably-Approximately-Correct (PAC) perspective. A consequence of our PAC guarantee is the existence of a sweet spot for the optimal transparency of the system. When such a sweet spot is attained, a hybrid model can potentially perform better than a standalone black box. Second, we provide a general taxonomy for the different ways of training hybrid models: the Post-Black-Box and Pre-Black-Box paradigms, which differ in the order in which the interpretable and complex components are trained. We show where the state-of-the-art hybrid models Hybrid-Rule-Set and Companion-Rule-List fall in this taxonomy. Third, we implement the two paradigms in a single method, HybridCORELS, which extends the CORELS algorithm to hybrid modeling. By leveraging CORELS, HybridCORELS provides a certificate of optimality of its interpretable component and precise control over transparency. We finally show empirically that HybridCORELS is competitive with existing hybrid models, and performs just as well as a standalone black box (or even better) while being partly transparent.
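To make the gating idea concrete, here is a minimal, generic sketch of hybrid-model inference in Python. It is not the HybridCORELS implementation; the rule list, the stand-in black box, and all names are illustrative. The fraction of inputs answered by the rule list is the transparency of the system.

```python
# Minimal sketch of hybrid-model inference (not the HybridCORELS implementation):
# a transparent rule list handles the inputs it covers, and everything else is
# routed to a black box. The toy rules and features below are illustrative only.
from typing import Callable, Dict, List, Tuple

def hybrid_predict(
    x: Dict[str, float],
    rule_list: List[Tuple[Callable[[Dict[str, float]], bool], int]],
    black_box: Callable[[Dict[str, float]], int],
) -> Tuple[int, str]:
    """Return (prediction, component), recording which component handled the input."""
    for rule, label in rule_list:
        if rule(x):                       # gating: the first matching rule claims the input
            return label, "interpretable"
    return black_box(x), "black_box"      # uncovered inputs fall through to the black box

# Toy usage: two transparent rules plus a trivial stand-in black box.
rules = [
    (lambda x: x["age"] < 25 and x["priors"] == 0, 0),
    (lambda x: x["priors"] > 3, 1),
]
black_box = lambda x: int(x["age"] * 0.01 + x["priors"] * 0.2 > 0.5)

print(hybrid_predict({"age": 22, "priors": 0}, rules, black_box))  # (0, 'interpretable')
print(hybrid_predict({"age": 40, "priors": 2}, rules, black_box))  # (1, 'black_box')
```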
Fairness and interpretability are fundamental requirements for the development of responsible machine learning. However, learning optimal interpretable models under fairness constraints has been identified as a major challenge. In this paper, we investigate and improve on a state-of-the-art exact learning algorithm, CORELS, which learns rule lists that are certifiably optimal in terms of accuracy and sparsity. Prior work has incrementally integrated statistical fairness metrics into CORELS. We demonstrate the limitations of such an approach for exploring the search space efficiently, before proposing an Integer Linear Programming (ILP) method that leverages accuracy, sparsity, and fairness jointly for better pruning. Our thorough experiments show clear benefits of our approach for the exploration of the search space.
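To illustrate how an ILP can combine accuracy, sparsity, and fairness into a single pruning bound, the sketch below solves a deliberately simplified toy problem with the open-source pulp solver: it upper-bounds the accuracy gain achievable by completing a rule-list prefix, under a sparsity budget and a statistical-parity-style constraint. If this bound plus the prefix's accuracy cannot beat the best complete rule list found so far, the branch can be discarded. This is only an illustration of the general idea, not the formulation used in the paper: all numbers are toy values and rule overlaps are ignored.

```python
# Deliberately simplified illustration of the pruning idea (NOT the paper's exact ILP):
# upper-bound the accuracy gain of any completion of a rule-list prefix, subject to a
# sparsity budget and a bounded gap in positive prediction rates between two groups.
import pulp

# Per candidate rule j: correct predictions it would add, and the number of positive
# predictions it would add within each protected group (toy values, overlaps ignored).
gain        = [40, 35, 25, 20]   # additional correctly classified examples
pos_group_a = [30, 5, 20, 2]     # additional positive predictions, group A
pos_group_b = [2, 25, 3, 18]     # additional positive predictions, group B
n_a, n_b    = 100, 100           # group sizes
budget      = 2                  # sparsity: at most 2 more rules in the list
epsilon     = 0.05               # max allowed gap in positive prediction rates

prob = pulp.LpProblem("pruning_bound", pulp.LpMaximize)
z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(len(gain))]  # rule j selected?

prob += pulp.lpSum(g * zj for g, zj in zip(gain, z))   # objective: maximize accuracy gain
prob += pulp.lpSum(z) <= budget                        # sparsity budget

sum_a = pulp.lpSum(p * zj for p, zj in zip(pos_group_a, z))
sum_b = pulp.lpSum(p * zj for p, zj in zip(pos_group_b, z))
# Fairness (both directions), with the rate gap multiplied through by n_a * n_b.
prob += n_b * sum_a - n_a * sum_b <= epsilon * n_a * n_b
prob += n_a * sum_b - n_b * sum_a <= epsilon * n_a * n_b

prob.solve(pulp.PULP_CBC_CMD(msg=0))
# With these toy numbers the fairness constraint excludes the highest-gain pair of rules.
print("upper bound on remaining accuracy gain:", pulp.value(prob.objective))
```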