Effective integration of humans and automation in control systems engineering has been an ongoing effort since McRuer's original 1959 descriptions of human operators in servomechanism systems. In the six decades since Fitts' "Humans are better at / Machines are better at" lists of the early 1950s, the increasing capabilities of automation and computer systems have repeatedly reshaped considerations of function allocation and human-automation interaction. Distributed autonomy and dynamic function allocation in modern human-automation and human-robotic interaction benefit from increased computing capabilities, resulting in systems with potentially fluid (and sometimes conflicting) boundaries between human and automation control. Examples from human and robotic spaceflight illustrate both extremes: robots can demonstrate significant autonomy (automated "safe-moding" and restart by Mars rovers), while humans may have limited autonomy (astronauts conducting extravehicular activity rely on, and wait for, ground controllers to create or modify procedures to complete required tasks). Proposed future advances in human-automation interaction and coordination include the development of "centaur" teams, in which humans interact with sophisticated software and robotic agents as team members, rather than under fixed allocations as human-controlled servos or automation-controlled autonomous systems. Approaches within the authors' lab include qualitative research on process and cognitive task demands to create functional architectures for AI applications in cybersecurity. Another method uses agent-based modeling to incorporate individual thinking styles and interpersonal interactions in task performance simulations, creating more robust hybrid systems that account for cognitive and social factors in complex settings.