Recent advancements in Educational AI have focused on models for automatic question generation. Yet, these advancements face challenges: (1) their "black-box" nature limits transparency, thereby obscuring the decision-making process; and (2) their novelty sometimes causes inaccuracies due to limited feedback systems. Explainable AI (XAI) aims to address the first limitation by clarifying model decisions, while Interactive Machine Learning (IML) emphasises user feedback and model refinement. However, both XAI and IML solutions primarily serve AI experts, often neglecting novices such as teachers. This oversight leads to issues such as misaligned expectations and reduced trust. Following a user-centred design method, we collaborated with teachers and ed-tech experts to develop an AI-aided system for generating multiple-choice question distractors, which incorporates feedback, control, and visual explanations. Evaluating these features through semi-structured interviews with 12 teachers, we found a strong preference for the feedback feature, which enables teacher-guided AI improvement. The usefulness of control and explanations depended largely on model performance: both were valued when the model performed well. When it did not, teachers sought contextual information rather than AI-centric explanations, suggesting a preference for data-centric explanations. Based on these results, we propose guidelines for creating tools that enable teachers to steer and interact with question-generating AI models.

CCS Concepts: • Human-centered computing → Empirical studies in HCI; User studies; Empirical studies in visualization; Visualization design and evaluation methods.