In computer science, there are ongoing efforts to make machine learning more interpretable or explainable, and thus to better understand the underlying models, algorithms, and their behavior. But what exactly is interpretability, and how can it be achieved? Such questions lead into philosophical waters because their answers depend on what explanation and understanding are, and thus on issues that have long been central to the philosophy of science. In this paper, we review the recent philosophical literature on interpretability. We propose a systematization in terms of four tasks for philosophers: (i) clarify the notion of interpretability, (ii) explain the value of interpretability, (iii) provide frameworks for thinking about interpretability, and (iv) explore important features of interpretability in order to adjust our expectations about it.