Networks provide useful tools for analysing diverse complex systems from natural, social, and technological domains. The growing size and variety of data, such as additional nodes and links with associated weights, directions, and signs, can carry complementary information. An abundance of links and weights, on the other hand, produces denser networks containing noisy, insignificant, or otherwise redundant data. Moreover, typical network analysis and visualization techniques presuppose sparsity and do not scale to dense, weighted networks. As a remedy, network backbone extraction methods aim to retain only the important links while preserving the useful and elucidative structure of the original network for further analyses. Here, we provide the first methods for extracting signed network backbones from intrinsically dense, unsigned, unipartite, weighted networks. Using a statistical null model, the proposed significance filter and vigor filter allow edge signs to be inferred. Empirical analysis on migration, voting, temporal interaction, and species similarity networks reveals that the proposed filters extract meaningful, sparse signed backbones while preserving the multiscale nature of the network. The resulting backbones exhibit characteristics typically associated with signed networks, such as reciprocity, structural balance, and community structure. The developed tool is provided as a free, open-source software package.
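The abstract does not spell out the filtering mechanics, so the following is a minimal sketch of the general idea: compare each observed weight against a strength-based null expectation, keep edges whose deviation is both statistically significant and strong, and assign the sign of the deviation. The null model, variance assumption, thresholds, and the `extract_signed_backbone` helper are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def extract_signed_backbone(W, sig_threshold=2.0, vigor_threshold=0.1):
    """Illustrative significance + vigor filtering of a weighted adjacency matrix.

    W: (n, n) nonnegative weight matrix of an intrinsically dense network.
    Returns a sparse signed matrix with entries in {-1, 0, +1}.
    """
    total = W.sum()
    s_out = W.sum(axis=1, keepdims=True)  # node out-strengths, shape (n, 1)
    s_in = W.sum(axis=0, keepdims=True)   # node in-strengths, shape (1, n)

    # Null model (assumed here): weights distributed proportionally to strengths.
    expected = s_out * s_in / total

    # Significance filter: standardized deviation from the null expectation
    # (a Poisson-like variance assumption chosen for illustration).
    significance = (W - expected) / np.sqrt(expected + 1e-12)

    # Vigor filter: relative deviation mapped into [-1, 1].
    vigor = (W - expected) / (W + expected + 1e-12)

    signs = np.zeros_like(W, dtype=float)
    keep = (np.abs(significance) >= sig_threshold) & (np.abs(vigor) >= vigor_threshold)
    signs[keep] = np.sign(vigor[keep])  # edge sign follows the deviation's direction
    np.fill_diagonal(signs, 0)
    return signs
```

Tightening either threshold yields a sparser backbone, which is how such filters can preserve structure at multiple scales.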
Fig. 1. Decision-Aiding Systems Meet System Accountability Benchmark to Generate System Cards. (Leftmost image: Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0)

Decisions in public policy are increasingly made or assisted by automated decision-making algorithms. Many of these algorithms process personal data for tasks such as predicting recidivism, assisting welfare decisions, and identifying individuals using face recognition. While potentially improving efficiency and effectiveness, such algorithms are not inherently free from issues such as bias, opaqueness, lack of explainability, and maleficence. Given that the outcomes of these algorithms have significant impacts on individuals and society and are open to analysis and contestation after deployment, such issues must be accounted for before deployment. Formal audits are one way to ensure that algorithms used in public policy meet the appropriate accountability standards. Based on an extensive analysis of the literature, this work proposes a unifying framework, the system accountability benchmark, for formal audits of artificial intelligence-based decision-aiding systems in public policy, as well as system cards that serve as scorecards presenting the outcomes of such audits. The benchmark consists of 50 criteria organized in a four-by-four matrix whose dimensions are (i) data, (ii) model, (iii) code, and (iv) system, crossed with (a) development, (b) assessment, (c) mitigation, and (d) assurance. Each criterion is described and discussed alongside a suggested measurement scale indicating whether the evaluation is to be performed by humans or computers and whether its outcome is binary or ordinal. The proposed system accountability benchmark reflects the state of the art in accountable systems, serves as a checklist for future algorithm audits, and paves the way for subsequent research.
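To make the matrix-plus-measurement-scale structure concrete, here is a small sketch of how the benchmark could be encoded and aggregated into a system card. The class layout, the two example criteria, and the `system_card` helper are hypothetical illustrations; the actual benchmark defines 50 criteria.

```python
from dataclasses import dataclass
from enum import Enum

class Evaluator(Enum):
    HUMAN = "human"
    COMPUTER = "computer"

class Scale(Enum):
    BINARY = "binary"    # criterion met / not met
    ORDINAL = "ordinal"  # graded rating, e.g., 0-4

@dataclass
class Criterion:
    dimension: str   # one of: data, model, code, system
    stage: str       # one of: development, assessment, mitigation, assurance
    name: str
    evaluator: Evaluator
    scale: Scale

# Hypothetical excerpt; the real benchmark specifies 50 such criteria.
CRITERIA = [
    Criterion("data", "development", "data provenance documented",
              Evaluator.HUMAN, Scale.BINARY),
    Criterion("model", "assessment", "predictive performance reported",
              Evaluator.COMPUTER, Scale.ORDINAL),
]

def system_card(scores):
    """Group criterion scores into the 4x4 matrix cells of a system card."""
    card = {}
    for c in CRITERIA:
        cell = card.setdefault((c.dimension, c.stage), [])
        cell.append((c.name, scores.get(c.name, 0)))
    return card
```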
As organizations’ investments in information systems/information technology (IS/IT) increase, the assessment methods used during the IS/IT investment decision-making process become increasingly important. Since successful IS/IT projects are key to the sustainability of an organization, identifying the factors that affect project success yields useful insights. In this study, 18 assessment methods are identified from the literature. A novel classification scheme is proposed, and the assessment methods are classified into financial, strategic, and organizational categories. A novel rule-based method for determining the size of IS/IT projects is also proposed. Detailed information on project characteristics, employed IS/IT assessment methods, and project success is collected for 110 real-world IS/IT projects. The collected data are analyzed with ANOVA and regression tests to examine the factors that affect project success. Use of organization-related assessment methods, as proposed in this study, is found to increase the success rate of projects. Obligation towards the project and use of a multi-criteria methodology have significant relationships with project success, whereas project size, use of gut feeling during evaluation, and the employed system development methodology do not have statistically significant effects on project success.
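A brief sketch of the kind of ANOVA and regression analysis the abstract describes, under stated assumptions: the column names, the tiny made-up dataset, and the model formulas below are illustrative stand-ins, not the study's actual 110-project data or specification.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical project records standing in for the study's collected data.
df = pd.DataFrame({
    "success": [7, 5, 8, 4, 6, 9, 3, 7],              # project success score
    "org_method": [1, 0, 1, 0, 1, 1, 0, 0],           # organizational method used?
    "multi_criteria": [1, 0, 1, 0, 0, 1, 0, 1],       # multi-criteria methodology?
    "size": ["S", "M", "L", "M", "S", "L", "L", "S"], # rule-based project size
})

# One-way ANOVA: does mean success differ across project size categories?
anova_table = sm.stats.anova_lm(ols("success ~ C(size)", data=df).fit(), typ=2)
print(anova_table)

# Regression: relate assessment-method choices (and size) to project success.
model = ols("success ~ org_method + multi_criteria + C(size)", data=df).fit()
print(model.summary())
```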
As artificial intelligence plays an increasingly substantial role in decisions affecting humans and society, the accountability of automated decision systems has been receiving increasing attention from researchers and practitioners. Fairness, which concerns eliminating unjust treatment of and discrimination against individuals or sensitive groups, is a critical aspect of accountability. Yet, for evaluating fairness, the literature offers a plethora of fairness metrics that embody different, often incompatible, perspectives and assumptions. This work focuses on group fairness. Most group fairness metrics require parity between selected statistics computed from the confusion matrices of different sensitive groups. Generalizing this intuition, this paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness. To further analyze the source of potential unfairness, an appropriate post hoc analysis methodology is also presented. The usefulness of the test, metric, and post hoc analysis is demonstrated via a case study on the controversial case of COMPAS, an automated decision system employed in the US to assist judges in assessing recidivism risks. Overall, the methods and metrics provided here can help assess the fairness of automated decision systems as part of a more extensive accountability assessment, such as one based on the system accountability benchmark.
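The underlying intuition, parity of confusion-matrix statistics across groups, can be sketched in a few lines. The chi-square formulation of the test and the pairwise total-variation form of the parity error below are assumptions made for illustration and may differ in detail from the paper's definitions.

```python
import numpy as np
from scipy.stats import chi2_contingency

def confusion_counts(y_true, y_pred, groups):
    """Per-group 2x2 confusion counts, flattened as [TN, FP, FN, TP]."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    table = {}
    for g in np.unique(groups):
        m = groups == g
        table[g] = np.array([
            np.sum((y_true[m] == 0) & (y_pred[m] == 0)),  # TN
            np.sum((y_true[m] == 0) & (y_pred[m] == 1)),  # FP
            np.sum((y_true[m] == 1) & (y_pred[m] == 0)),  # FN
            np.sum((y_true[m] == 1) & (y_pred[m] == 1)),  # TP
        ])
    return table

def equal_confusion_test(y_true, y_pred, groups):
    """Chi-square test of whether confusion outcomes are independent of group."""
    counts = np.array(list(confusion_counts(y_true, y_pred, groups).values()))
    stat, p, _, _ = chi2_contingency(counts)
    return stat, p  # small p suggests the groups' confusion profiles differ

def confusion_parity_error(y_true, y_pred, groups):
    """Average total-variation distance between groups' confusion distributions."""
    dists = [c / c.sum() for c in confusion_counts(y_true, y_pred, groups).values()]
    pairs = [(i, j) for i in range(len(dists)) for j in range(i + 1, len(dists))]
    return float(np.mean([0.5 * np.abs(dists[i] - dists[j]).sum() for i, j in pairs]))
```

A post hoc step would then inspect which cell (e.g., false positives) drives any detected disparity.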