As robots become more common, many applications benefit from deploying them in teams that sense the world in a distributed manner. In such settings, the robots, or a central control center, must communicate and fuse information received from multiple sources. A key challenge is perceptual heterogeneity, where the sensors, perceptual representations, and training instances used by the robots differ dramatically. In this paper, we use Gärdenfors' conceptual spaces, a geometric representation with strong roots in cognitive science and psychology, to represent the appearance of objects, and we show how heterogeneity can be explored intuitively by considering robots whose conceptual spaces differ at different levels. To bridge low-level sensory differences, we abstract raw sensory data into properties (such as color or texture categories), represented as Gaussian mixture models, and demonstrate that this abstraction facilitates both individual learning and the fusion of concepts between robots. Concepts (e.g., objects) are represented as fuzzy mixtures of these properties. We then address the case where the conceptual spaces of two robots differ and only a subset of the properties is shared; here, we use joint interaction and statistical metrics to determine which properties are shared. Finally, we show how conceptual spaces can accommodate such missing properties when fusing concepts received from different robots. We demonstrate the fusion of information in real-robot experiments with a MobileRobots AmigoBot and a Pioneer 2DX equipped with significantly different cameras and, on one robot, a SICK lidar.
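As a minimal illustration of the representation described above, the sketch below models a property as a one-dimensional Gaussian mixture over a sensory domain (here, a hue axis) and a concept as a fuzzy, weighted mixture of named properties, with a toy fusion rule that averages the weights of properties two robots are known to share. The class names, parameter values, and fusion rule are illustrative assumptions for exposition, not the exact formulation used in the paper.

```python
# Illustrative sketch (not the paper's exact formulation): a "property" as a
# Gaussian mixture over a low-level sensory domain, and a "concept" as a
# fuzzy mixture of property memberships. Fusion averages the weights of the
# properties the two robots are assumed to share.
import numpy as np


class Property:
    """A perceptual property (e.g. the color 'red') as a 1-D Gaussian mixture."""

    def __init__(self, weights, means, stds):
        self.weights = np.asarray(weights, dtype=float)
        self.means = np.asarray(means, dtype=float)
        self.stds = np.asarray(stds, dtype=float)

    def _density(self, x):
        # Mixture density at a scalar observation x.
        comps = np.exp(-0.5 * ((x - self.means) / self.stds) ** 2) / (
            self.stds * np.sqrt(2 * np.pi)
        )
        return float(np.dot(self.weights, comps))

    def membership(self, x):
        """Degree to which observation x (e.g. a hue value) exhibits the property,
        normalized by the approximate peak density (evaluated at component means)."""
        peak = max(self._density(m) for m in self.means)
        return self._density(x) / peak


class Concept:
    """A concept (e.g. an object class) as a fuzzy mixture of named properties."""

    def __init__(self, property_weights):
        self.property_weights = dict(property_weights)  # name -> weight in [0, 1]


def fuse_concepts(concept_a, concept_b, shared_properties):
    """Toy fusion rule: average the weights of properties both robots share,
    and keep robot A's weights for properties robot B does not represent."""
    fused = dict(concept_a.property_weights)
    for name in shared_properties:
        if name in concept_b.property_weights:
            fused[name] = 0.5 * (concept_a.property_weights.get(name, 0.0)
                                 + concept_b.property_weights[name])
    return Concept(fused)


if __name__ == "__main__":
    # Hypothetical 'red' property over a hue axis in [0, 1] (red near 0 and 1).
    red = Property(weights=[0.5, 0.5], means=[0.02, 0.98], stds=[0.05, 0.05])
    print("membership(hue=0.03):", round(red.membership(0.03), 3))

    # Two robots describe the same object class with partly different properties.
    robot_a = Concept({"red": 0.9, "cylindrical": 0.8})
    robot_b = Concept({"red": 0.7, "metallic": 0.6})
    fused = fuse_concepts(robot_a, robot_b, shared_properties={"red"})
    print("fused weights:", fused.property_weights)
```

In this sketch, properties unique to one robot are simply carried over unchanged, which mirrors the situation where two robots share only a subset of their conceptual-space dimensions; the paper's actual treatment of shared and missing properties may differ.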