Artificial intelligence (AI) systems that make predictions, translate text, generate text and images, and much more are ubiquitous in modern life. Such AI systems typically perform tasks that humans cannot perform with the same speed and efficiency, and they are therefore increasingly integrated into human decision-making. But they also tend to be black boxes, in the sense that even machine learning experts do not understand how they arrive at their outputs from given inputs. In this paper, we argue that, to promote positive interactions between AI systems and their users, such systems should be equipped with a user interface that provides reasons for their decisions. By ‘reasons,’ we mean linguistically articulated explanations of why a certain output was issued, similar to the explanations we would expect humans to give for their actions. Some preliminary survey data suggest that non-expert users place high value on having their ‘why-questions’ answered directly by the AI system. We further propose that this feature will increase non-expert users’ trust in AI, which is desirable as AI takes on an ever larger role in our society.