In recent years, spatial-temporal graph neural networks (GNNs) have substantially improved traffic prediction by modeling intricate spatiotemporal dependencies in irregular traffic networks. However, these approaches can underutilize the intrinsic properties of traffic data and are prone to overfitting because their operations are largely local. This paper introduces the Implicit Sensing Self-Supervised learning model (ISSS) to address these issues. ISSS adopts a multi-pretext-task framework for traffic flow prediction, integrating several self-supervised tasks, including contrastive learning and spatial and temporal jigsaw puzzles, to learn both specific and general representations. The jigsaw tasks emphasize the intrinsic spatial relationships among sensor locations and the correlations between traffic patterns at those locations and critical temporal periods, reflecting the importance of temporal information in traffic data. By transforming the data into an alternative feature space, ISSS facilitates the contrastive learning tasks, strengthens regularization, and promotes a deeper understanding of the learned traffic features, yielding more specific representations. Comparative experiments on six datasets demonstrate the effectiveness of ISSS in learning general and discriminative features in both supervised and unsupervised modes. ISSS outperforms existing models, showcasing its potential to improve traffic flow prediction while mitigating the challenges associated with local operations and overfitting.
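
To make the multi-pretext setup concrete, the following is a minimal sketch, not the authors' implementation, of how a supervised forecasting loss can be combined with a contrastive pretext loss and a spatial-jigsaw permutation-classification loss in one training objective. PyTorch is assumed, the GRU encoder is a stand-in for the spatial-temporal backbone, and all module names, shapes, and loss weights are hypothetical.

```python
# Sketch only: joint objective = forecasting loss + contrastive loss + jigsaw loss.
# All names (MultiPretextTrafficModel, info_nce, train_step) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPretextTrafficModel(nn.Module):
    def __init__(self, num_sensors, in_dim=1, hidden=64, horizon=12, num_perms=8):
        super().__init__()
        # Shared encoder over windows of shape (batch, time, sensors, features);
        # a simple stand-in for a spatial-temporal GNN backbone.
        self.encoder = nn.GRU(num_sensors * in_dim, hidden, batch_first=True)
        self.forecast_head = nn.Linear(hidden, num_sensors * horizon)  # supervised task
        self.proj_head = nn.Linear(hidden, hidden)                     # contrastive projection
        self.jigsaw_head = nn.Linear(hidden, num_perms)                # jigsaw permutation classifier

    def encode(self, x):
        b, t, n, f = x.shape
        _, h = self.encoder(x.reshape(b, t, n * f))
        return h.squeeze(0)  # (batch, hidden)

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss between two augmented views of the same traffic window."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def train_step(model, x, y, x_aug, x_jigsaw, perm_labels, w_con=0.5, w_jig=0.5):
    """One training step: weighted sum of forecasting and pretext losses."""
    h = model.encode(x)
    pred = model.forecast_head(h).reshape(y.shape)
    loss_sup = F.l1_loss(pred, y)                                      # traffic flow forecasting
    loss_con = info_nce(model.proj_head(h),
                        model.proj_head(model.encode(x_aug)))          # contrastive pretext task
    loss_jig = F.cross_entropy(model.jigsaw_head(model.encode(x_jigsaw)),
                               perm_labels)                            # spatial jigsaw pretext task
    return loss_sup + w_con * loss_con + w_jig * loss_jig
```

In this sketch, `x_aug` would be an augmented view of the input window and `x_jigsaw` a version whose sensor order has been shuffled by one of `num_perms` fixed permutations, with `perm_labels` indexing the applied permutation; the weighting of the pretext losses is an assumed design choice, not taken from the paper.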