Abstract. Text-based games are environments in which the definition of the world, its representation to the player (hereafter, the agent), and the agent's interactions with the environment are all realized through text. Text-based games expose abstract, executable representations of indoor spaces through verbally referenced concepts. Yet, the ability of text-based games to represent indoor environments of real-world complexity is currently limited by insufficient support for complex space decomposition and space interaction concepts. This paper suggests a procedure to automate the mapping of real-world geometric floorplan information into text-based game environment concepts, using the Microsoft TextWorld game platform as a case study. To capture the complexities of indoor spaces, we enrich the existing TextWorld concepts, drawing on theoretical navigation concepts. We first decompose indoor spaces using skeletonization, and then identify formal space concepts and their relationships. We further broaden the spectrum of supported agent interactions with an extended grammar, including egocentric navigation instructions. We demonstrate and discuss these new capabilities in an evacuation scenario. Our implementation extends the capabilities of TextWorld to provide a research testbed for spatial research, including symbolic spatial modelling, interaction with indoor spaces, and agent-based machine learning and language-processing tasks.
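To make the mapping concrete, the following minimal sketch (not the paper's actual pipeline) shows how spatial units obtained from a floorplan decomposition could be registered as rooms and connections through TextWorld's GameMaker API. The room names and adjacencies are hypothetical placeholders standing in for the output of the decomposition step.

```python
# A minimal sketch, assuming the floorplan has already been decomposed into
# spatial units (e.g. via skeletonization). Each unit becomes a TextWorld room
# and each adjacency a connection; names and adjacencies are placeholders.
from textworld import GameMaker

M = GameMaker()

# One TextWorld room per decomposed spatial unit.
rooms = {name: M.new_room(name) for name in ("Lobby", "Corridor", "Office")}

# Connect units along the adjacencies found in the decomposition.
# TextWorld expresses connections through directional exits of rooms.
M.connect(rooms["Lobby"].east, rooms["Corridor"].west)
M.connect(rooms["Corridor"].north, rooms["Office"].south)

M.set_player(rooms["Lobby"])   # starting location, e.g. for an evacuation run
game = M.build()               # compile into a playable TextWorld game
```

Because TextWorld connections are anchored to cardinal exits, a mapping of this kind would also need to assign a direction to every adjacency extracted from the floorplan before the rooms can be connected.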
When a person moves, the set of objects in their visual range changes. Hence, the set of objects perceived from a specific range of locations may be considered a signature (possibly non-unique) of this region and used for the localization of this person. In the case of fixed objects, the number of regions with a specific set of visible objects is limited. A verbal description containing references to elements of this set of visible objects can then be used to localize a person in the space. This paper proposes an approach for decomposing a space into regions that are characterized by such sets of visible objects. In our approach, at least a portion of an object's surface must be visible (beyond single points) to form part of the signature. Our method calculates two-dimensional visibility polygons for portions of an object's surface. By overlaying these polygons, we partition the space into regions with distinct visibility signatures. The approach has been implemented, and we demonstrate how to represent space by qualitative locations using these visibility signatures. We further show how this representation can be used to locate a person within a space from a set of visible objects.
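As an illustration of the overlay step, the sketch below partitions a free space by repeated intersection and difference with per-surface visibility polygons, labelling each resulting region with its visibility signature. It assumes the visibility polygons have already been computed; the shapely library, the coordinates, and the surface identifiers are illustrative assumptions, not the described implementation.

```python
# A minimal sketch: overlay precomputed 2-D visibility polygons (one per
# partial object surface) to partition free space into regions labelled by
# their visibility signature, i.e. the set of surfaces visible from anywhere
# inside the region. All geometries below are hypothetical placeholders.
from shapely.geometry import Polygon

free_space = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])   # walkable area
visibility = {                                                # surface id -> visibility polygon
    "door_D1": Polygon([(0, 0), (10, 0), (10, 6), (0, 6)]),
    "sign_S1": Polygon([(4, 2), (10, 2), (10, 10), (4, 10)]),
}

# Start with one unlabelled region covering the whole free space, then split
# every region by each visibility polygon, extending the signature of the
# part lying inside it.
regions = [(frozenset(), free_space)]
for surface, vis in visibility.items():
    refined = []
    for signature, region in regions:
        inside = region.intersection(vis)
        outside = region.difference(vis)
        if not inside.is_empty:
            refined.append((signature | {surface}, inside))
        if not outside.is_empty:
            refined.append((signature, outside))
    regions = refined

# Localization: a reported set of visible surfaces narrows a person down to
# the regions carrying exactly that signature.
observed = frozenset({"door_D1", "sign_S1"})
for signature, region in regions:
    if signature == observed:
        print(region.wkt)
```

The refinement loop keeps the regions pairwise disjoint by construction, so the resulting signature regions form a partition of the free space, as required for using the signatures as qualitative locations.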