The rise of artificial intelligence (AI) has produced advanced tools that perform tasks traditionally done by humans, but this progress introduces significant legal, ethical, and security challenges. These challenges encompass concerns about individual privacy, data security, legal liability, bias in AI decision-making, and the need for international collaboration on regulatory frameworks that balance innovation with public protection. This research analyzes the roles and responsibilities of governments, companies, and individuals in safeguarding personal data from misuse by AI. Employing normative juridical methods with statutory and analytical approaches, it focuses primarily on Indonesia’s Law Number 27 of 2022 on Personal Data Protection and Law Number 19 of 2016, which amends Law Number 11 of 2008 on Electronic Information and Transactions. Although Indonesia’s Personal Data Protection (PDP) Law seeks to prevent data misuse through comprehensive regulations and clear protection measures, it does not specifically address the challenges posed by AI technology. Given AI’s ability to process large volumes of data rapidly, the risk of personal data misuse increases if such processing is not closely monitored. Reform is therefore needed to develop regulations that are more specific and adaptable to AI developments, including the establishment of specialized agencies to monitor and prosecute AI-related misuse of personal data.