In the 1970s, some AI leaders predicted that we would soon see all manner of artificially intelligent entities in our daily lives. Unfortunately, in the interim, this has been true mostly in the realm of science fiction. Recently, however, pioneering researchers have been bringing together advances in many subfields of AI, such as robotics, computer vision, natural language and speech processing, and cognitive modeling, to create the first generation of robots and avatars that illustrate the true potential of combining these technologies. The purpose of this article is to highlight a few of these projects and to draw some conclusions from them for future research.

We begin with a short discussion of scope and terminology. Our focus here is on how robots and avatars interact with humans, rather than with the environment. Obviously, this cannot be a sharp distinction, since humans form part of the environment for such entities. However, we are interested primarily in how new interaction capabilities enable robots and avatars to enter into new kinds of relationships with humans, such as hosts, advisors, companions, and jesters.

We will not try to define robot here, but we do want to point out that our focus is on humanoid robots (although we stretch the category a bit to include a few animallike robots that illustrate the types of interaction we are interested in). Industrial automation robotics, while economically very important, and a continual source of advances in sensor and effector technology for humanoid robots, will continue to be more of a behind-the-scenes contributor to our everyday lives.

The meaning of the term avatar is currently in flux. Its original and narrowest use is to refer to the graphical representation of a person (user) in a virtual reality system. Recently, however, the required con-