Communication between mobile robots requires a transfer of symbols, where each symbol signifies a meaning. In typical applications, however, meaning is ascribed to the symbols by the engineers who programmed the robots. This thesis explores an alternative: the use of algorithms and representations that allow mobile robots to evolve a shared set of symbols whose meanings are derived from the robots' own sensors and cognition.

Mobile robots have two important properties that affect the learning of symbols: i) they are capable of locomotion through space over time; and ii) they come in many different configurations with different architectures. Previous work has demonstrated that mobile robots can learn shared lexicons to describe space through perceptual referents and referents grounded in cognitive maps. However, open questions remain as to how mobile robots can learn to communicate using temporal terms, and how lexicon learning is affected by different cognitive architectures.

The major research question addressed in this thesis is: how can mobile robots develop spatial and temporal lexicons across different cognitive architectures? Three facets of language learning are considered particularly important for robots with different cognitive architectures: i) the ability to ground terms in cognition; ii) the ability to ground identical terms in different sensors and cognition for each robot; and iii) the ability to handle referential uncertainty, the difficulty of linking words to meanings within ambiguous contexts. Pairs of mobile robots are used to develop lexicons for spatial and temporal terms and to study each of these abilities. The terms developed by the robots are tested by using them to organize spatial and temporal tasks, and are extended to additional terms through grounding transfer.

In this thesis, language learning is studied within a framework defined by Peirce's semiotic triangle, building on previous Lingodroid studies. Conversations between robots are used to socially ground symbols within the robots' spatial and temporal cognition. Distributed lexicon tables are used to store links between words and meanings. As the lexicons evolve, the words are analyzed for immediate usability, and the final lexicons are analyzed for coherence.

Four studies were completed to analyze different aspects of lexicon learning. Study I addressed the aims of learning duration terms with mobile robots and of using grounded spatial and temporal language together to perform joint tasks. Identical mobile robots were used to ground terms for time as durations measured by clocks (time since the last meeting). The robots were able to develop coherent lexicons and to successfully organize future meetings using the learned terms.

Study II addressed the aim of learning event-based temporal terms using mobile robots. Identical mobile robots were used to ground terms for time in sunlight levels (time of day). The robots required the ability to ground terms in features formed from a brightness level and its derivative. Aga...
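To make the lexicon-table idea concrete, the following Python sketch shows one possible shape for a word-meaning association table and a single conversational update. It is illustrative only and assumes a simple association-count scheme; the names LexiconTable, word_for, and hear_pairing, and the concept labels, are hypothetical and are not the implementation used in this thesis.

```python
import random
import string
from collections import defaultdict


class LexiconTable:
    """Illustrative word-meaning association table for one robot.

    Cell values count how often a word has been paired with a concept
    (e.g. a place, duration, or time-of-day category) in conversation.
    """

    def __init__(self):
        self.associations = defaultdict(float)  # (word, concept) -> score
        self.words = set()

    def invent_word(self, length=4):
        # Invent a new syllable-like word when no word exists yet
        # for the concept the robot wants to express.
        word = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
        self.words.add(word)
        return word

    def word_for(self, concept):
        # Choose the word most strongly associated with the concept,
        # or invent one if the concept has never been named.
        scored = [(w, self.associations[(w, concept)]) for w in self.words]
        scored = [ws for ws in scored if ws[1] > 0]
        if not scored:
            return self.invent_word()
        return max(scored, key=lambda ws: ws[1])[0]

    def hear_pairing(self, word, concept, weight=1.0):
        # Strengthen the link between a heard word and the concept the
        # robot itself grounds it in (its own sensors and cognition).
        self.words.add(word)
        self.associations[(word, concept)] += weight

    def meaning_of(self, word):
        # Interpret a heard word as its most strongly associated concept.
        scored = [(c, s) for (w, c), s in self.associations.items()
                  if w == word and s > 0]
        return max(scored, key=lambda cs: cs[1])[0] if scored else None


# One conversational exchange (sketch): the speaker names its current
# concept; the hearer grounds the heard word in its *own* concept for
# the shared context, so identical words may rest on different sensors.
speaker, hearer = LexiconTable(), LexiconTable()
word = speaker.word_for("duration:short")
speaker.hear_pairing(word, "duration:short")
hearer.hear_pairing(word, "duration:short")
```

The key design point the sketch tries to capture is that each robot keeps its own table and updates it only from its own grounding of the shared context, which is what allows identical words to be grounded in different sensors and cognition on each robot.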