In recent years, concern about the risk of bias and discrimination in algorithmic systems has grown, attracting significant attention from the research community. To ensure a system's fairness, various methods and techniques have been developed to assess and mitigate potential biases. Such methods, also known as "Formal Fairness", examine various aspects of the system's reasoning mechanism and outcomes, with techniques ranging from local explanations (at the feature level) to visual explanations (saliency maps). An equally important aspect is users' perception of the system's fairness. Even if a decision system is provably "Fair", users who find it difficult to understand how its decisions were made will refrain from trusting, accepting, and ultimately using the system altogether. This has raised the issue of "Perceived Fairness", which concerns the means of reassuring users of a system's trustworthiness. In that sense, providing users with some form of explanation of why and how certain outcomes were produced is highly relevant, especially as reasoning mechanisms grow in complexity and computational power. Recent studies propose a plethora of explanation types. The current work reviews recent progress in explaining systems' reasoning and outcomes, and categorizes and presents it as a reference survey of the state of the art in fairness-related explanations.

CCS CONCEPTS: • Human-centered computing → Human-computer interaction (HCI); • Computing methodologies → Artificial intelligence.
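To make the notion of a "local explanation (at feature level)" concrete, the sketch below shows one common form: additive per-feature attributions for a single decision of a linear scoring model. The model, its weights, and the feature names are illustrative assumptions, not taken from any system covered by the survey.

```python
# A minimal sketch of a feature-level local explanation for one decision of a
# linear scoring model: each feature's contribution w_i * x_i to the score.
# Weights and feature names are hypothetical, chosen only for illustration.
import numpy as np

feature_names = ["income", "debt_ratio", "age"]
weights = np.array([0.8, -1.2, 0.1])   # hypothetical trained weights
bias = -0.3

def local_explanation(x):
    """Return per-feature contributions to the decision score; positive
    values push the decision one way, negative values the other."""
    contributions = weights * x
    score = contributions.sum() + bias
    return dict(zip(feature_names, contributions)), score

explanation, score = local_explanation(np.array([1.2, 0.5, 0.3]))
print(score, explanation)
```

Showing users such per-feature contributions, rather than only the final outcome, is one way the surveyed work connects formal fairness techniques to perceived fairness.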
Robust learning in expressive languages with real-world data continues to be a challenging task. Numerous conventional methods appeal to heuristics without any assurance of robustness. While probably approximately correct (PAC) Semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called "implicit" learning has shown tremendous promise in obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC Semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework retains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
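The core idea of implicit learning in PAC Semantics is to answer entailment queries directly against the (noisy, partial) observations instead of first inducing an explicit constraint set. The sketch below illustrates this under simplified assumptions that are not the paper's implementation: each noisy example is an interval box over the variables, and a candidate linear bound c·x ≤ b is accepted as (1 − ε)-valid if an LP check confirms it on at least a (1 − ε) fraction of the examples.

```python
# A minimal, illustrative sketch of an implicit PAC-Semantics style query over
# linear arithmetic with interval-noisy data (not the paper's algorithm).
# For each example (an interval box), we test whether c.x <= b holds for every
# point in the box by maximizing c.x over the box with an LP; the query is
# accepted if it holds on at least a (1 - eps) fraction of the examples.
import numpy as np
from scipy.optimize import linprog

def entailed_by_example(c, b, box):
    """Check c.x <= b over the interval box by maximizing c.x
    (linprog minimizes, so we minimize -c.x)."""
    res = linprog(-np.asarray(c), bounds=box, method="highs")
    return res.success and -res.fun <= b + 1e-9

def pac_decide(c, b, examples, eps=0.1):
    """Accept the query if it is witnessed by >= (1 - eps) of the examples."""
    hits = sum(entailed_by_example(c, b, box) for box in examples)
    return hits >= (1 - eps) * len(examples)

# Hypothetical noisy observations of (x1, x2), each an interval box.
examples = [
    [(0.0, 1.0), (0.0, 2.0)],
    [(0.5, 1.5), (0.0, 1.0)],
    [(0.0, 0.5), (0.0, 0.5)],
]
# Query: is x1 + x2 <= 3 valid on at least 90% of the observations?
print(pac_decide([1.0, 1.0], 3.0, examples, eps=0.1))
```

Because the procedure only ever solves per-example LP feasibility checks, it avoids materializing an explicit learned theory, which is what makes the polynomial-time guarantees of the implicit approach plausible in this setting.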