Explainability and interpretability are two critical aspects of decision support systems. Within computer vision, they are critical in certain tasks related to human behavior analysis, such as in health care applications. Despite their importance, researchers have only recently begun to explore these aspects. This paper provides an introduction to explainability and interpretability in the context of computer vision, with an emphasis on looking at people tasks. Specifically, we review and study those mechanisms in the context of first impressions analysis. To the best of our knowledge, this is the first effort in this direction. Additionally, we describe a challenge we organized on explainability in first impressions analysis from video. We analyze in detail the newly introduced data set, the evaluation protocol, and the proposed solutions, and summarize the results of the challenge. Finally, derived from our study, we outline research opportunities that we foresee will be decisive in the near future for the development of the explainable computer vision field.

Keywords Explainable computer vision · First impressions · Personality analysis · Multimodal information · Algorithmic accountability

1 Introduction

Looking at People (LaP), the field of research focused on the visual analysis of human behavior, has been a very active research field within computer vision in the last decade [28,29,62]. Initially, LaP focused on tasks associated with basic human behaviors that were obviously visual (e.g., basic gesture recognition [71,70] or face recognition in restricted scenarios [10,83]). Research progress in LaP has now led to models that can solve those initial tasks relatively easily [66,82]. Attention on human behavior analysis has therefore turned to problems that are not visually evident to model or recognize [84,48,72]. For instance, consider the task of assessing personality traits from visual information [72]. Although there are methods that can estimate apparent personality traits with (relatively) acceptable performance, model recommendations by themselves are useless if the end user is not confident in the model's reasoning, as the primary use of such estimation is to understand bias in human assessors.

Explainability and interpretability are thus critical features of decision support systems in some LaP tasks [26]. The former focuses on mechanisms that can tell what is the rationale behind the decision or recommendation made by