Practitioners increasingly rely on machine learning (ML) models, yet these models have grown more complex and harder to understand. To address this issue, researchers have proposed techniques for explaining model predictions. However, practitioners struggle to use such explainability methods because they do not know which method to choose or how to interpret its results. We address these challenges by introducing TalkToModel: an interactive dialogue system that enables users to understand ML models through natural language conversations. TalkToModel comprises three components: 1) an adaptive dialogue engine that interprets natural language and generates meaningful responses, 2) an execution component that constructs the explanations used in the conversation, and 3) a conversational interface. In real-world evaluations, 73% of healthcare workers agreed they would use TalkToModel over existing systems for understanding a disease prediction model, and 85% of ML professionals agreed TalkToModel was easier to use than existing systems, demonstrating that TalkToModel is an effective tool for model explainability.
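To make the three-component architecture concrete, the sketch below wires a toy dialogue engine, execution component, and conversational interface around a scikit-learn classifier. It is a minimal illustration under stated assumptions, not the paper's implementation: the keyword matcher stands in for TalkToModel's learned language-understanding engine, the linear-coefficient ranking stands in for a real attribution method, and the names `Operation`, `dialogue_engine`, `execution_component`, and `conversational_interface` are hypothetical.

```python
# Hypothetical sketch of the three-component loop described in the abstract:
# a dialogue engine maps an utterance to an operation, an execution component
# computes the requested explanation, and a text interface ties them together.
from dataclasses import dataclass


@dataclass
class Operation:
    """A parsed user intent, e.g. asking why the model made a prediction."""
    name: str
    args: dict


def dialogue_engine(utterance: str) -> Operation:
    """Interpret natural language (toy keyword matching stands in for the
    learned parser the paper describes)."""
    text = utterance.lower()
    if "important" in text or "why" in text:
        return Operation("feature_importance", {})
    if "predict" in text:
        return Operation("predict", {})
    return Operation("unknown", {})


def execution_component(op: Operation, model, instance) -> str:
    """Construct the explanation or prediction used in the conversation."""
    if op.name == "predict":
        return f"The model predicts class {model.predict([instance])[0]}."
    if op.name == "feature_importance":
        # Toy stand-in for a real attribution method such as SHAP or LIME:
        # rank features by the magnitude of a linear model's coefficients.
        weights = getattr(model, "coef_", [[0.0] * len(instance)])[0]
        ranked = sorted(enumerate(weights), key=lambda p: -abs(p[1]))
        top = ", ".join(f"feature {i} (weight {w:.2f})" for i, w in ranked[:3])
        return f"The most influential features are: {top}."
    return "Sorry, I did not understand. Ask about predictions or importance."


def conversational_interface(model, instance):
    """Simple read-respond loop connecting the two components above."""
    while True:
        utterance = input("you> ")
        if utterance.strip().lower() in {"quit", "exit"}:
            break
        op = dialogue_engine(utterance)
        print("bot>", execution_component(op, model, instance))


if __name__ == "__main__":
    # Example usage with a simple disease-prediction-style classifier.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)
    model = LogisticRegression(max_iter=5000).fit(X, y)
    conversational_interface(model, X[0])
```

The design point the sketch illustrates is the separation of concerns the abstract names: language understanding, explanation construction, and interaction each live behind their own interface, so a stronger parser or a different explainer can be swapped in without touching the conversation loop.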