The unprecedented scale at which personal data is used to train machine learning (ML) models motivates an examination of how such data can be erased when implementing the GDPR's 'right to be forgotten'. The existing literature investigating this right takes a purely technical or a purely legal approach, lacking the collaboration required in this interdisciplinary space. Recent work has identified that there is no single solution to erasure in ML and that it must therefore be decided on a case-by-case basis. However, there is an absence of guidance for controllers to follow when personal data must be erased in ML. In this paper we develop a novel decision-making flow that encompasses the necessary considerations for a controller, addressing in particular the interdisciplinary considerations relevant to the EU GDPR and data protection scholarship, as well as concepts from computer science and their application in industry. This results in several optimal solutions for the controller and data subject, differing in their levels of erasure. To validate the proposed decision-making flow, a real case study is discussed throughout the paper. The paper highlights the need for a clearer framework when personal data must be erased in ML, empowering the regulator, controller and data subject.