Contrastive learning and masked image modelling have demonstrated exceptional performance on self-supervised representation learning, where Momentum Contrast (MoCo) and Masked AutoEncoder (MAE) are the respective state-of-the-art methods. In this work, we propose MOMA to distill from pre-trained MoCo and MAE in a self-supervised manner, combining the knowledge from both paradigms. During distillation, the teacher and the student are fed with the original inputs and masked inputs, respectively. Learning is enabled by aligning the normalized representations from the teacher with the projected representations from the student. This simple design enables efficient computation with an extremely high mask ratio and dramatically reduced training epochs, and requires no extra consideration of the distillation target. Experiments show that MOMA delivers compact student models with performance comparable to existing state-of-the-art methods, combining the power of both self-supervised learning paradigms, and presents competitive results across different computer vision benchmarks. We hope our method offers insight into transferring and adapting the knowledge of large-scale pre-trained models in a computationally efficient way.

Recent studies (Chung et al., 2021; Mishra et al., 2022) attempt to combine the power of contrastive learning and masked modelling, yielding promising results. They suggest that the two paradigms are complementary to each other and can deliver stronger representations when combined into a unified framework. However, integrating the two paradigms into one framework introduces higher computational cost, requiring extensive resources (e.g., hundreds of GPU hours, enormous memory capacity, and excessive storage). It is also not energy-efficient to train different frameworks from scratch, as they
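The distillation objective described in the abstract (normalized teacher representations aligned with projected student representations) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the linear projector, and the choice of negative cosine similarity as the alignment loss are all assumptions for demonstration.

```python
import numpy as np

def moma_distill_loss(student_feats, teacher_feats, proj_w):
    """Hypothetical sketch of a MOMA-style alignment loss.

    teacher_feats: features from the frozen teacher, which sees the full input.
    student_feats: features from the student, which sees a heavily masked input.
    proj_w:        weight of an assumed linear projection head on the student.
    """
    # Normalize the teacher representations (target of the alignment).
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=-1, keepdims=True)
    # Project the student representations into the teacher's space, then normalize.
    s = student_feats @ proj_w
    s = s / np.linalg.norm(s, axis=-1, keepdims=True)
    # Negative cosine similarity, averaged over the batch (lower is better).
    return float(-(s * t).sum(axis=-1).mean())

# Usage sketch: a batch of 4 feature vectors and an identity projector.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((4, 256))
student = rng.standard_normal((4, 256))
loss = moma_distill_loss(student, teacher, np.eye(256))
```

When the projected student features exactly match the teacher features, the loss reaches its minimum of -1; for unrelated features it stays near 0, so driving it down pulls the two representations together.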