Because imitation has been shown to drastically cut down the exploration space, it has been widely embraced by the robotics community as a way to speed up autonomous learning. This has mostly been done in fixed demonstrator-imitator relationships, where a predefined demonstrator performs the same action over and over until the robot has learned it sufficiently. In practical multi-robot scenarios, however, the imitation process should not interrupt the observed robot, as this would compromise its autonomy. The imitating robot therefore often has only a limited number of behaviors of the same type to learn from. As this usually does not provide enough information to learn a generalized version of the observed low-level actions, it can instead help the observer to optimize its strategy based on the recognized actions. In this case, the imitating robot first has to interpret the observed behavior in terms of its own behavior knowledge. This means that low-level observations have to be segmented into episodes of apparently similar behavior, that corresponding skills in the observing robot's own behavior repertoire have to be found for these episodes, and that corresponding state changes in the observing robot's own strategy have to be determined for the episode sequence. The authors have shown in a previous paper how the former can be done. In this paper we describe how our approach for Evolving Societies of Learning Autonomous Systems (ESLAS) is extended to support multi-robot imitation. We demonstrate how the recognition results can be used in a multi-robot scenario to align the robots' behaviors in a decentralized way, to speed up the overall learning process, and thus to increase overall autonomy.
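To make the interpretation steps above concrete, the following is a minimal, illustrative sketch of such a pipeline: an observation stream is segmented into episodes of apparently similar behavior, and each episode is matched against the observer's own skill repertoire. The change-point rule, feature representation, skill names, and prototypes are placeholder assumptions for illustration, not the ESLAS implementation.

```python
# Illustrative sketch only: segmentation rule, features, and skill prototypes
# are hypothetical placeholders, not the method described in this paper.
import numpy as np

def segment_episodes(observations: np.ndarray, threshold: float = 1.0) -> list[np.ndarray]:
    """Split a (T, d) observation stream at points where consecutive
    feature vectors differ by more than `threshold` (a simple change-point rule)."""
    episodes, start = [], 0
    for t in range(1, len(observations)):
        if np.linalg.norm(observations[t] - observations[t - 1]) > threshold:
            episodes.append(observations[start:t])
            start = t
    episodes.append(observations[start:])
    return episodes

def match_skill(episode: np.ndarray, skill_prototypes: dict[str, np.ndarray]) -> str:
    """Return the observer's own skill whose prototype feature vector is
    closest to the episode's mean feature vector."""
    summary = episode.mean(axis=0)
    return min(skill_prototypes,
               key=lambda name: np.linalg.norm(summary - skill_prototypes[name]))

# Example: a short observation stream and two made-up skill prototypes.
stream = np.array([[0.0, 0.1], [0.1, 0.1], [2.0, 2.1], [2.1, 2.0]])
prototypes = {"approach_ball": np.array([0.0, 0.0]), "kick": np.array([2.0, 2.0])}
recognized = [match_skill(ep, prototypes) for ep in segment_episodes(stream)]
print(recognized)  # ['approach_ball', 'kick']
```

The resulting skill sequence is the kind of recognition result that the remainder of the paper uses to align the robots' strategies.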