Abstract: In the early 1990s, computer scientists became motivated by the idea of rendering human-computer interactions more humanlike and natural for their users, both to address complaints that technologies impose a mechanical (sometimes even anti-social) aesthetic on their everyday environment, and to investigate innovative ways of managing system-environment complexity. With the recent development of the field of Social Robotics, and particularly Human-Robot Interaction, the integration of intentional emotional mechanisms into a system's control architecture becomes inevitable. Unfortunately, this raises significant issues that must be addressed before a successful emotional artificial system can be developed. This paper adds a further dimension to the documented arguments for and against the introduction of emotions into artificial systems by highlighting some fundamental paradoxes and mistakes, and proposes guidelines for developing successful affective intelligent social machines.