For intelligent virtual humans to interact in a rich environment, connections between the usable actions and objects must be defined. For scenarios consisting of hundreds of object types and many actions, having a simulation author create these connections by hand becomes difficult. This limitation causes the simulation to have only a few object connections, while the rest of the objects remain static and cannot be acted upon. Automated methods to connect objects to actions are promising but incomplete, as they still miss key interactions and are unable to determine important characteristics of each action, such as the manner in which it is performed. We present Agents Learning their Environment through Text (ALET) to connect graphical objects to actions and provide an understanding of those actions. Observing that text contains descriptions of actions and their connections to objects, ALET leverages large text corpora and knowledge bases to provide candidate connections and descriptions for the actions and objects available in a simulation. These candidate connections and descriptions populate a representation that describes the graphical models and animations. We test ALET against other semantic generation methods and find that it more accurately determines object roles in actions, as well as semantic information about the actions themselves.
KEYWORDS: autonomous actors, intelligent virtual humans, semantic virtual environments

Comput Anim Virtual Worlds. 2017;28:e1759. wileyonlinelibrary.com/journal/cav