Digital and networking technologies are increasingly used to predict who is at risk of attempting suicide. Such digitalized suicide prevention within and beyond mental health care raises ethical, social and legal issues for a range of actors involved. Here, I will draw on key literature to explore what issues (might) arise in relation to digitalized suicide prevention practices. I will start by reviewing some of the initiatives that are already implemented, and address some of the issues associated with these and with potential future initiatives. Rather than addressing the breadth of issues, however, I will then zoom in on two key issues: first, the duty of care and the duty to report, and how these two legal and professional standards may change within and through digitalized suicide prevention; and second, a more philosophical exploration of how digitalized suicide prevention may alter human subjectivity. To end with the now famous adage, digitalized suicide prevention is neither good nor bad, nor is it neutral, and I will argue that we need sustained academic and social conversation about who can and should be involved in digitalized suicide prevention practices and, indeed, in what ways it can and should (not) happen.