Digital technologies in the workplace have undergone a remarkable evolution in recent years. Biosensors and wearables that enable data collection and analysis through artificial intelligence (AI) systems are becoming widespread in working environments, whether private or public. These systems face strong criticism in the media and academia, often framed around the trend of algorithmic management. However, they can also be deployed for the common good, such as occupational safety and health (OSH). In this sense, they can be promoted by public authorities as part of a public policy on OSH, and they can also be used by public employers as a tool to improve workers' health. Nevertheless, we argue that AI systems for OSH are not free of thorny problems, given the sensitive data they collect and their potential for chilling effects and employment discrimination. Based on three realistic scenarios, we identify a series of ethical concerns raised by the use of such AI systems and elaborate on the legal responses these issues receive under existing European law. With this analysis, we highlight blind spots, that is, situations in which existing laws do not provide clear or satisfying answers to relevant ethical concerns. We conclude that other avenues should be investigated to help the public sector determine whether it is legally and socially acceptable to deploy AI systems and achieve its public policy goals.