This chapter introduces a model that connects representations of the space surrounding a virtual humanoid's body with the space it shares with several interaction partners. This work intends to support virtual humans (or humanoid robots) in near-space interaction and is inspired by studies from cognitive neuroscience on the one hand and social interaction studies on the other. We present our work on learning the body structure of an articulated virtual human using data from virtual touch and proprioception sensors. The results are utilized for a representation of its reaching space, the so-called peripersonal space, a concept known from cognitive neuroscience. In interpersonal interaction involving several partners, the partners' peripersonal spaces may overlap and establish a shared reaching space. We define this as their interaction space, where cooperation takes place and where actions to claim or release spatial areas have to be adapted to avoid obstructing the other's movements. Our model of interaction space is developed as an extension of Kendon's F-formation system, a foundational theory of how humans orient themselves in space when communicating. Thus, interaction space allows for measuring not only the spatial arrangement (i.e., body posture and orientation) between multiple interaction partners, but also the extent of space they share. Peripersonal and interaction space are modelled as potential fields that control the virtual human's behaviour strategy. As an example, we show how the virtual human can relocate objects toward or away from locations reachable for all partners, thus facilitating cooperation in an interaction task. In this chapter we demonstrate how a virtual human can cooperate with a partner in building a toy tower together, as one aspect of computationally modelling shared space and spatial behaviour for action coordination.
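To make the potential-field idea concrete, the following is a minimal sketch, not the chapter's actual implementation: each partner's peripersonal space is approximated as a radially decaying field around a body centre, and the interaction space is taken as the pointwise minimum of the partners' fields, so it is nonzero only where every partner can reach. The function names, the linear falloff, and the gradient-free relocation step are illustrative assumptions.

```python
import numpy as np

def peripersonal_potential(points, centre, reach):
    """Radially decaying field: 1 at the body centre, 0 at the reach limit.
    A simple linear falloff stands in for whatever field shape the model uses."""
    d = np.linalg.norm(points - centre, axis=-1)
    return np.clip(1.0 - d / reach, 0.0, None)

def interaction_potential(points, agents):
    """Interaction space as the overlap of all partners' peripersonal fields,
    taken here as the pointwise minimum (nonzero only where all can reach).
    `agents` is a list of (centre, reach) pairs."""
    fields = [peripersonal_potential(points, c, r) for c, r in agents]
    return np.minimum.reduce(fields)

def relocate_toward_shared(obj_pos, points, agents, step=0.1):
    """Move an object one step toward the best-rated point of the shared field,
    i.e. toward a location reachable for all partners."""
    pot = interaction_potential(points, agents)
    target = points[np.argmax(pot)]
    direction = target - obj_pos
    n = np.linalg.norm(direction)
    return obj_pos if n < 1e-9 else obj_pos + step * direction / n
```

With two agents at a shoulder-width-scale distance, the shared field is positive between them and vanishes where either partner cannot reach, so repeatedly applying `relocate_toward_shared` drives an object into the mutually reachable region; moving objects away from that region would use the same field with the sign of the step reversed.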
Virtual humans are autonomous agents with human-like appearance and usually human-like multi-modal behaviour such as speech, gaze, gestures, and facial expressions. In three-dimensional virtual reality environments, virtual humans can interact with other virtual humans or with real humans. For example, the virtual human Max (Kopp et al., 2003) can act as a co-situated guide in a construction task, and Steve (Rickel and Johnson, 2000) can act as a tutor demonstrating physical tasks to students. In the mentioned scenarios, overlapping workspaces were usually avoided by maintaining enough distance between the partners to prevent interference between their movements. We believe that in natural interaction such interferences have to be dealt with to accomplish cooperative interaction tasks. Thus, we present our work on modelling a virtual human's spatial behaviour in shared near-space interactions, in order to facilitate both the accomplishment of the cooperative task and the partner's engagement in it. Spatial interaction in tasks carried out at distances near to the agent's body usually poses a great challenge to virtual humans. In contrast, humans seem to...