Sleep recognition refers to the detection or identification of sleep posture, state, or stage, which can provide critical information for the diagnosis of sleep disorders. Most sleep recognition methods are limited to single-task recognition involving only single-modal sleep data, and no generalized model exists for multi-task recognition on multi-sensor sleep data. Moreover, the shortage and imbalance of sleep samples limit the extension of existing machine learning methods such as support vector machines, decision trees, and convolutional neural networks, leading to reduced learning ability and overfitting. Self-supervised learning techniques have shown their capability to learn significant feature representations. In this paper, a novel self-supervised learning model is proposed for sleep recognition, composed of an upstream self-supervised pre-training task and a downstream recognition task. The upstream task is conducted to increase the data capacity, using frequency-domain information and rotation views to learn multi-dimensional sleep feature representations. The downstream task fuses a bidirectional long short-term memory network and a conditional random field as the sequential data recognizer to produce sleep labels. Our experiments show that the proposed algorithm provides promising results in sleep recognition and can be further applied in clinical and smart-home environments as a diagnostic tool. The source code is available at: https://github.com/zhaoaite/SSRM.
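To illustrate the kind of upstream pretext construction the abstract describes, the sketch below generates rotation-view training pairs and a frequency-domain view from a single sensor frame. This is a minimal illustration, not the paper's implementation: the function names, the choice of 90-degree rotation multiples, and the use of an FFT magnitude spectrum are assumptions for demonstration.

```python
import numpy as np

def make_pretext_samples(window, angles=(0, 90, 180, 270)):
    """Generate rotation-pretext training pairs from one 2-D sensor frame.

    `window` is an (H, W) sensor frame; each sample is the frame rotated
    by one of `angles` (multiples of 90 degrees), and the label is the
    angle index a pretext model would be trained to predict.
    (Names and angle set are illustrative, not from the paper.)
    """
    samples, labels = [], []
    for k, angle in enumerate(angles):
        # np.rot90 rotates by k quarter turns; angle // 90 maps 0/90/180/270 -> 0/1/2/3
        samples.append(np.rot90(window, k=angle // 90))
        labels.append(k)
    return np.stack(samples), np.array(labels)

def frequency_features(signal):
    """Magnitude spectrum of a 1-D signal as a simple frequency-domain view."""
    return np.abs(np.fft.rfft(signal))
```

A pretext classifier trained on such (rotated frame, angle index) pairs needs no manual sleep labels, which is how the upstream task can enlarge the effective training set before the downstream recognizer is fit.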