An algorithm is a series of stepwise instructions used by a machine to perform a mathematical operation. In 1955, the term artificial intelligence (AI) was coined to indicate that a machine could be programmed to duplicate human intelligence. Although that goal has not yet been reached, the use of sophisticated machine learning algorithms has moved us closer to it. While algorithm‐enabled systems and devices will bring many benefits to occupational safety and health, this Commentary focuses on the new sources of worker risk that algorithms present in worker management systems, advanced sensor technologies, and robotic devices. A new “digital Taylorism” may erode worker autonomy and lead to work intensification and psychosocial stress. The large amounts of worker information held within algorithm‐enabled systems pose security and privacy risks. Reliance on indiscriminate data mining may reproduce forms of discrimination and lead to inequalities in hiring, retention, and termination. Workers interfacing with robots may face work intensification and job displacement, while injury caused by a robotic device in the course of employment is also possible. Algorithm governance strategies are discussed, including risk management practices, national and international laws and regulations, and emerging legal accountability proposals. Determining whether an algorithm is safe for workplace use is rapidly becoming a challenge for manufacturers, programmers, employers, workers, and occupational safety and health practitioners. To achieve the benefits that algorithm‐enabled systems and devices promise in the future of work, now is the time to study how to manage their risks effectively.