Machines have moved from supporting human decision-making to making decisions for humans. This shift has been accompanied by concerns about the impact of algorithmic decisions on individuals and society. Unsurprisingly, the delegation of important decisions to machines has triggered a debate on how to regulate automated decision-making practices. In Europe, policymakers have attempted to address these concerns through a combination of individual rights and due-process safeguards established in data protection law, which in turn relies on other bodies of law, e.g., anti-discrimination law and restrictions on trade secrets, to achieve certain goals. This article adds to the literature by disentangling the challenges arising from automated decision-making systems, focusing on those that arise not from malevolence but merely as unwanted side-effects of increased automation. Such side-effects include those arising from the internal processes leading to a decision (e.g., opacity), from the impacts of decisions (e.g., discrimination), and from the allocation of responsibility for decisions; they have consequences at both the individual and the societal level. On this basis, the article discusses the redress mechanisms provided in data protection law. It shows that the approaches within data protection law complement one another but do not fully remedy the identified side-effects. This is particularly true for side-effects that lead to systemic societal shifts. To that end, new paradigms to guide the future policymaking discourse are explored.