In 1997 Ronald T. Azuma introduced the following definition of Augmented Reality (AR): “Some researchers define AR in a way that requires the use of Head-Mounted Displays (HMDs). To avoid limiting AR to specific technologies, this survey defines AR as systems that have the following three characteristics: 1) Combines real and virtual, 2) Interactive in real-time, 3) Registered in 3-D.” Azuma also mentions that “AR might apply to all senses, not just sight.” [1] This definition has guided AR research ever since. AR researchers have focused on the various ways technology, in particular digital technology (computer-generated imagery, computer vision and world modelling, interaction technology, and AR display technology), can be developed to realize this view of AR. When generating and aligning virtual content, the emphasis has been on sight, our most dominant sense, although we cannot survive without the others. Azuma and others mention the other senses and assume that the definition also covers virtual content other than computer-generated imagery, perhaps even content that is not computer-generated or not generated and controlled in space and time. Nevertheless, the definition has constituents that can be given various interpretations. This makes it workable, but it is useful to discuss how we should distinguish between real and virtual content, what it is that distinguishes real from virtual, and how virtual content can trigger changes in the real world (and the other way around), taking into account that AR is becoming part of ubiquitous computing. That is, rather than looking at AR from the point of view of particular professional, educational, or entertainment applications, we should look at AR as ever-present and embedded in ubiquitous computing (Ubicomp), with the AR device’s sensors and actuators communicating with the smart environments in which it is embedded.

The focus in this paper is on ‘optical see-through’ (OST) AR and ever-present AR. Ever-present AR will become possible with unobtrusive AR glasses [2] or contact lenses [3,4]. Usually, interaction is considered from the point of view of what we see and hear, but we are certainly also aware of touch experiences and of exploring objects with active touch. We can also experience scents and flavors, passively but also actively: we can consciously explore a scent or taste, become aware of it, and ask the environment, not necessarily explicitly since our preferences are known and our intentions can be predicted, to respond in an appropriate way that evokes or continues an interaction.

Interaction in AR and with AR technology requires a new look at interaction. Are we interacting with the AR device, with the environment, or with the environment through the AR device? Part of what we perceive is real, part is superimposed on reality, and part is the interaction between real and virtual reality. How do we interact with this mix of realities? Additionally, an AR HMD changes our view as our position, head orientation, or gaze changes. We interact with the device with, for example, speech and hand gestures; we interact with the environment with, for example, pose changes; and we interact with virtual content with modalities that are appropriate for that content: pushing a virtual block, opening a virtual door, or having a conversation with a virtual human that inhabits the AR world.
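To make this three-way distinction concrete, the following minimal sketch routes interaction events to one of the three targets just described: the device itself, the surrounding environment, or virtual content. All identifiers (InteractionEvent, classify, dispatch) are hypothetical illustrations, not taken from any particular AR toolkit, and the modality-to-target mapping simply mirrors the examples given above.

```python
# Hypothetical sketch: routing interaction events in an optical see-through
# AR system to the device, the environment, or the virtual content.

from dataclasses import dataclass
from enum import Enum, auto


class Target(Enum):
    DEVICE = auto()           # e.g., speech commands, menu gestures
    ENVIRONMENT = auto()      # e.g., pose or gaze changes tracked by the device
    VIRTUAL_CONTENT = auto()  # e.g., pushing a virtual block, opening a virtual door


@dataclass
class InteractionEvent:
    modality: str  # "speech", "hand_gesture", "pose", "gaze", "touch", ...
    payload: dict  # modality-specific data (recognized text, joint positions, ...)


def classify(event: InteractionEvent) -> Target:
    """Assign an event to a target, mirroring the distinction drawn above."""
    if event.modality in ("speech", "hand_gesture"):
        return Target.DEVICE
    if event.modality in ("pose", "gaze"):
        return Target.ENVIRONMENT
    return Target.VIRTUAL_CONTENT


def dispatch(event: InteractionEvent) -> str:
    """Hand the event to the handler for its target and report what happened."""
    target = classify(event)
    if target is Target.DEVICE:
        return f"device handles {event.modality}: {event.payload}"
    if target is Target.ENVIRONMENT:
        return f"viewpoint updated from {event.modality}: {event.payload}"
    return f"virtual content reacts to {event.modality}: {event.payload}"


if __name__ == "__main__":
    print(dispatch(InteractionEvent("speech", {"text": "open menu"})))
    print(dispatch(InteractionEvent("pose", {"yaw_degrees": 12.5})))
    print(dispatch(InteractionEvent("touch", {"object": "virtual door"})))
```

In practice the boundary is less clean than this static mapping suggests: the same modality, a hand gesture for instance, may address the device in one context and a virtual object in another, which is precisely why interaction in AR requires the fresh look argued for here.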
In addition, we can think of interactions that become possible because technology gives us access to, and lets us act upon, sensor information that cannot be perceived with our natural perception receptors. In a ubiquitous computing environment, our AR device can provide us with a 360-degree view of our surroundings, drones can feed us information from above, infrared sensors can detect people and events in the dark, our car can receive visual information about not-yet-visible vehicles approaching an intersection [5], sound frequencies beyond the range of the human ear can be made accessible, smell sensors can enhance the human sense of smell, et cetera.

In this paper, we investigate the characteristics of interactions in AR and relate them to regular human-computer interaction (interacting with tools) [6], interaction with multimedia [7], interaction through behavior [8], implicit interaction [9], embodied interaction [10], fake interaction [11], and interaction based on Gibson’s visual perception theory [12]. This will be done from the point of view of ever-present AR [13] with optical see-through wearable devices.

References could not be included because of space limitations.