Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework, which we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
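The abstract's core idea — pairing a released model with structured documentation of its intended use and disaggregated evaluation — can be sketched in code. The following is a minimal illustration, not the paper's actual schema: the field names, model name, and numbers are all hypothetical placeholders chosen to mirror the kinds of sections the abstract mentions (intended use, evaluated groups, per-group metrics).

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Toy model-card record; fields are illustrative, not the paper's exact sections."""
    model_details: str
    intended_use: str
    factors: list            # demographic/phenotypic groups the model was evaluated across
    metrics: dict            # aggregate metric name -> value
    disaggregated: dict      # group label -> {metric name -> value}

# Hypothetical card for a smiling-detection model like the one in the abstract.
card = ModelCard(
    model_details="Smiling-detection classifier, v1.0 (hypothetical)",
    intended_use="Research on face-attribute benchmarking; not for identification",
    factors=["sex", "age"],
    metrics={"accuracy": 0.91},
    disaggregated={
        "sex=female": {"accuracy": 0.93},
        "sex=male": {"accuracy": 0.89},
    },
)

# A reader can check performance for a specific group rather than
# relying on the aggregate number alone.
print(card.disaggregated["sex=male"]["accuracy"])
```

The point of the structure is that the per-group numbers travel with the model, so a mismatch between aggregate and disaggregated performance is visible before deployment.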
Game theory has been useful for understanding risk-taking and cooperative behavior. However, in studies of the neural basis of decision-making during games of conflict, subjects typically play against opponents with predetermined strategies. The present study introduces a neurobiologically plausible model of action selection and neuromodulation, which adapts to its opponent's strategy and environmental conditions. The model is based on the assumption that dopaminergic and serotonergic systems track expected rewards and costs, respectively. The model controlled both simulated and robotic agents playing Hawk-Dove and Chicken games against subjects. When subjects played against an aggressive version of the model, their strategy shifted significantly from Win-Stay-Lose-Shift to Tit-For-Tat. Subjects became retaliatory when confronted with agents that tended towards risky behavior. These results highlight the important interactions between subjects and agents utilizing adaptive behavior. Moreover, they reveal neuromodulatory mechanisms that give rise to cooperative and competitive behaviors.
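The model's central assumption — a dopamine-like signal tracking expected reward and a serotonin-like signal tracking expected cost, combined to select actions — can be illustrated with a toy agent. This is a minimal sketch under that assumption, not the study's actual model: the class, learning rule (a simple delta rule), action labels, and payoffs are all invented for illustration.

```python
import random

class NeuromodAgent:
    """Toy action selector: a dopamine-like trace estimates expected reward
    and a serotonin-like trace estimates expected cost, per action."""

    def __init__(self, actions=("hawk", "dove"), lr=0.1, seed=0):
        self.reward = {a: 0.0 for a in actions}  # dopaminergic estimates
        self.cost = {a: 0.0 for a in actions}    # serotonergic estimates
        self.lr = lr
        self.rng = random.Random(seed)

    def choose(self):
        # Select the action with the best net value (reward minus cost),
        # breaking ties randomly.
        net = {a: self.reward[a] - self.cost[a] for a in self.reward}
        best = max(net.values())
        return self.rng.choice([a for a, v in net.items() if v == best])

    def update(self, action, reward, cost):
        # Delta-rule updates nudge each estimate toward the observed outcome.
        self.reward[action] += self.lr * (reward - self.reward[action])
        self.cost[action] += self.lr * (cost - self.cost[action])

agent = NeuromodAgent()
# Hypothetical experience: "hawk" pays well but incurs a larger cost
# (e.g., injury from escalated conflict), while "dove" is cheap and safe.
for _ in range(50):
    agent.update("hawk", reward=1.0, cost=2.0)
    agent.update("dove", reward=0.5, cost=0.0)
print(agent.choose())  # → dove
```

Raising the cost estimates (the serotonin-like trace) makes the agent more risk-averse, which is one way such a model could shift between the aggressive and cautious play styles the abstract describes.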