Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This paper introduces a new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license (https://github.com/ibm/aif360). The main objectives of this toolkit are to facilitate the transition of fairness research algorithms into industrial settings and to provide a common framework for fairness researchers to share and evaluate algorithms. The package includes a comprehensive set of fairness metrics for datasets and models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. It also includes an interactive Web experience (https://aif360.mybluemix.net) that provides a gentle introduction to the concepts and capabilities for line-of-business users, as well as extensive documentation, usage guidance, and industry-specific tutorials that enable data scientists and practitioners to incorporate the most appropriate tool for their problem into their work products. The architecture of the package has been engineered to conform to a standard paradigm used in data science, thereby further improving usability for practitioners. This architectural design and its abstractions enable researchers and developers to extend the toolkit with new algorithms and improvements and to use it for performance benchmarking. A built-in testing infrastructure maintains code quality.
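To make the workflow concrete, here is a minimal sketch of how the toolkit's metric and mitigation APIs are typically combined: compute a dataset-level fairness metric, apply a pre-processing mitigation algorithm, and recompute the metric. The choice of the Adult dataset and of 'sex' as the protected attribute are illustrative; exact class and method names should be checked against the AIF360 documentation.

```python
# Minimal sketch of a typical AIF360 workflow: measure bias in a dataset,
# mitigate it with a pre-processing algorithm, and re-measure.
# Assumes the aif360 package and the Adult dataset files are installed.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# 'sex' is the protected attribute; 1 = privileged, 0 = unprivileged.
privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

dataset = AdultDataset()

# Dataset-level fairness metric before mitigation.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('Disparate impact before:', metric.disparate_impact())

# Reweighing assigns instance weights so that favorable outcomes become
# independent of the protected attribute in the transformed dataset.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('Disparate impact after:', metric_after.disparate_impact())
```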
Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical to engendering consumers' trust in a service. Many industries use transparent, standardized, but often not legally required documents called supplier's declarations of conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone. SDoCs may be considered multi-dimensional fact sheets that capture and quantify various aspects of the product and its development to make it worthy of consumers' trust. Inspired by this practice, we propose FactSheets to help increase trust in AI services. We envision such documents containing purpose, performance, safety, security, and provenance information, completed by AI service providers for examination by consumers. We suggest a comprehensive set of declaration items tailored to AI and provide examples for two fictitious AI services in the appendix of the paper. * A. Olteanu's work was done while at IBM Research. The author is currently affiliated with Microsoft Research.
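As a purely hypothetical illustration of how a FactSheet might be represented programmatically, the sketch below models the envisioned sections (purpose, performance, safety, security, provenance) as a simple data structure. The field names and example values are invented stand-ins, not the paper's actual declaration items.

```python
# Illustrative sketch only: a FactSheet as a simple data structure.
# Fields are hypothetical stand-ins for the declaration items proposed
# in the paper, not its actual item list.
from dataclasses import dataclass, field

@dataclass
class FactSheet:
    purpose: str                                      # intended use of the AI service
    performance: dict = field(default_factory=dict)   # e.g. test accuracy
    safety: dict = field(default_factory=dict)        # fairness, explainability
    security: dict = field(default_factory=dict)      # robustness testing
    provenance: dict = field(default_factory=dict)    # data and model lineage

sheet = FactSheet(
    purpose='Score mortgage applications for credit risk',
    performance={'test_accuracy': 0.91},
    safety={'disparate_impact': 0.83, 'explanations': 'per-decision reasons'},
    security={'adversarial_testing': 'completed'},
    provenance={'training_data': 'internal loan records, 2010-2018'},
)
print(sheet.purpose)
```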
The de novo design of antimicrobial therapeutics involves the exploration of a vast chemical repertoire to find compounds with broad-spectrum potency and low toxicity. Here, we report an efficient computational method for the generation of antimicrobials with desired attributes. The method leverages guidance from classifiers trained on an informative latent space of molecules modelled using a deep generative autoencoder, and screens the generated molecules using deep-learning classifiers as well as physicochemical features derived from high-throughput molecular dynamics simulations. Within 48 days, we identified, synthesized and experimentally tested 20 candidate antimicrobial peptides, of which two displayed high potency against diverse Gram-positive and Gram-negative pathogens (including multidrug-resistant Klebsiella pneumoniae) and a low propensity to induce drug resistance in Escherichia coli. Both peptides have low toxicity, as validated in vitro and in mice. We also show using live-cell confocal imaging that the bactericidal mode of action of the peptides involves the formation of membrane pores. The combination of deep learning and molecular dynamics may accelerate the discovery of potent and selective broad-spectrum antimicrobials.
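The following is a conceptual sketch, not the paper's implementation, of the core generate-and-screen loop: sample points in a learned latent space, keep those that attribute classifiers accept, and decode the survivors into candidate sequences for downstream screening (e.g. molecular-dynamics-derived features, synthesis, assays). The decoder, classifiers, and thresholds are toy stand-ins.

```python
# Conceptual sketch of classifier-guided screening in a generative model's
# latent space. The autoencoder, classifiers, and thresholds below are toy
# stand-ins, not the paper's trained models.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 32

def decode(z):
    """Stand-in decoder: maps a latent vector to a 'peptide' string."""
    alphabet = 'ACDEFGHIKLMNPQRSTVWY'
    idx = (np.abs(z[:20]) * 7).astype(int) % len(alphabet)
    return ''.join(alphabet[i] for i in idx)

def antimicrobial_score(z):
    """Stand-in classifier: probability the latent point is antimicrobial."""
    return 1.0 / (1.0 + np.exp(-z.sum() / LATENT_DIM ** 0.5))

def toxicity_score(z):
    """Stand-in classifier: probability of toxicity."""
    return 1.0 / (1.0 + np.exp(z[0]))

# Rejection sampling: draw latent points and keep only those the attribute
# classifiers accept, then decode the survivors into candidate sequences.
candidates = []
while len(candidates) < 5:
    z = rng.standard_normal(LATENT_DIM)
    if antimicrobial_score(z) > 0.8 and toxicity_score(z) < 0.2:
        candidates.append(decode(z))

print(candidates)
```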
The extraction of high-level color descriptors is an increasingly important problem, as these descriptions often provide links to image content. When combined with image segmentation, color naming can be used to select objects by color, describe the appearance of the image, and generate semantic annotations. This paper presents a computational model for color categorization, color naming, and the extraction of color composition. We start from the National Bureau of Standards' recommendation for color names and, through subjective experiments, develop our color vocabulary and syntax. To assign a color name from the vocabulary to an arbitrary input color, we then design a perceptually based color-naming metric. The proposed algorithm follows relevant neurophysiological findings and studies on human color categorization. Finally, we extend the algorithm and develop a scheme for extracting the color composition of a complex image. According to our results, the proposed method accurately identifies known color regions in different color spaces, the color names it assigns to randomly selected colors agree with human judgments, and its descriptions of the color composition of complex scenes are consistent with human observations.
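A toy sketch of the nearest-prototype idea behind perceptual color naming is shown below: convert an input sRGB color to CIELAB and assign the vocabulary name whose prototype is closest. The six-name vocabulary and the plain Euclidean Lab distance are simplifications, not the paper's vocabulary or its perceptually based metric.

```python
# Toy illustration of perceptually based color naming: convert sRGB to
# CIELAB and assign the vocabulary name with the smallest Lab distance.
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point)."""
    def inv_gamma(u):
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # Linear sRGB -> XYZ (D65).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Tiny illustrative vocabulary (sRGB prototypes), not the paper's lexicon.
VOCAB = {'red': (255, 0, 0), 'green': (0, 128, 0), 'blue': (0, 0, 255),
         'yellow': (255, 255, 0), 'white': (255, 255, 255), 'black': (0, 0, 0)}

def color_name(rgb):
    lab = srgb_to_lab(*rgb)
    return min(VOCAB, key=lambda n: math.dist(lab, srgb_to_lab(*VOCAB[n])))

print(color_name((200, 30, 40)))   # -> 'red'
```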
Color descriptors are among the most important features used in image analysis and retrieval. Owing to its compact representation and low complexity, direct histogram comparison is a commonly used technique for measuring color similarity. However, it has serious drawbacks, including a high degree of dependency on color codebook design, sensitivity to quantization boundaries, and inefficiency in representing images with few dominant colors. In this paper, we present a new algorithm for color matching that models the behavior of the human visual system in capturing the color appearance of an image. We first develop a new method for color codebook design in the Lab space that is well suited to creating small, fixed color codebooks for image analysis, matching, and retrieval. We then introduce a statistical technique to extract perceptually relevant colors. We also propose a new color distance measure based on the optimal mapping between the two sets of color components representing two images. Experiments comparing the new algorithm to existing techniques show that these novel elements lead to a better match with human perception when judging image similarity in terms of color composition.
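The optimal-mapping idea can be sketched as a minimum-cost assignment between two sets of dominant colors, as below. The Lab Euclidean cost, the area-mismatch penalty, and its weight of 100 are illustrative choices standing in for the paper's actual measure.

```python
# Sketch of an optimal-mapping distance between two dominant-color
# signatures: each image is summarized by a few (Lab color, area fraction)
# pairs, and the distance is the minimum-cost one-to-one assignment
# between the two sets. A simplification, shown for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def composition_distance(sig_a, sig_b):
    """sig_* : list of ((L, a, b), weight) dominant-color pairs."""
    cost = np.zeros((len(sig_a), len(sig_b)))
    for i, (ca, wa) in enumerate(sig_a):
        for j, (cb, wb) in enumerate(sig_b):
            # Penalize both color difference and mismatched area fraction.
            cost[i, j] = np.linalg.norm(np.subtract(ca, cb)) + 100 * abs(wa - wb)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

img1 = [((60, 40, 30), 0.7), ((90, 0, 0), 0.3)]   # reddish + near-white
img2 = [((58, 45, 25), 0.6), ((85, 2, -2), 0.4)]  # similar composition
img3 = [((50, -50, 40), 0.8), ((20, 0, 0), 0.2)]  # green + dark

print(composition_distance(img1, img2))  # small: similar compositions
print(composition_distance(img1, img3))  # large: different compositions
```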