In recent years, research has increasingly focused on developing intelligent tutoring systems that provide data-driven support for students who need assistance during programming assignments. One goal of such intelligent tutors is to provide students with interventions comparable in quality to those a human tutor would give. While most studies have focused on generating different forms of on-demand support, such as next-step hints and worked examples, at any given moment during a programming assignment, there is a lack of research on why human tutors provide different forms of proactive interventions to students in different situations. This information is critical for enabling intelligent programming environments to select the appropriate type of student support at the right moment. In this work, we studied human tutors' reasons for providing interventions during two introductory programming assignments in a block-based environment. Three human tutors evaluated a sample of 86 struggling moments identified from students' log data using a data-driven model. For each struggling moment, the tutors specified whether an intervention was needed and why (or why not). We analyzed the expert tags and their consensus discussions and extracted six main reasons that led the experts to intervene: "missing key components to make progress", "using wrong or unnecessary blocks", "misusing needed blocks", "having critical logic errors", "needing confirmation and next steps", and "unclear student intention". We use six case studies to illustrate specific student code trace examples and the tutors' reasons for intervention. We also discuss the potential types of automatic interventions that could address these cases. Our work sheds light on when and why students might need programming interventions. These insights contribute towards improving the quality of automated, data-driven support in programming learning environments.