Sequential recommendation (SR) predicts a user's future preferences from their historical interaction sequence. Recently, several SR methods have employed contrastive learning to introduce self-supervised signals and alleviate the data sparsity problem. Despite these achievements, they overlook the fact that users engage in multiple types of behavior in real-world scenarios (e.g., page view, favorite, add to cart, and purchase). Moreover, they disregard the temporal dependencies in users' preferences and the influence of attribute information, leaving such models unable to accurately capture users' personalized preferences. Therefore, we propose a multi-behavior collaborative contrastive learning model for sequential recommendation. First, we introduce both user-side and item-side attribute information and design an attribute-weight-enhanced attention mechanism for multi-behavior interaction scenarios, which strengthens the model's ability to capture users' multi-behavior preferences while accounting for the influence of attribute information. Second, to capture users' fine-grained temporal preferences, we divide the interaction sequences into different time scales based on the timestamps of users' multi-behavior interactions. We further introduce temporal-aware attention to generate temporal embeddings at different time scales and effectively fuse them with the users' multi-behavior embeddings. Finally, we design collaborative contrastive learning, which collaboratively captures users' multi-behavior personalized preferences from both the temporal and attribute perspectives, further alleviating data sparsity. We conduct extensive experiments on two datasets to validate the effectiveness and superiority of our model.
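
The abstract does not give implementation details, but two of its ingredients can be sketched at a high level: partitioning a user's interaction sequence into time-scale buckets by timestamp, and contrasting embeddings of the same user obtained from the temporal and attribute perspectives. The sketch below is a minimal illustration under assumed names (`split_by_time_scale`, `info_nce`) and an assumed InfoNCE-style loss; it is not the authors' actual architecture.

```python
# Illustrative sketch only; helper names and the InfoNCE formulation are assumptions.
import torch
import torch.nn.functional as F

def split_by_time_scale(timestamps, boundaries):
    """Assign each interaction to a time-scale bucket (e.g., day/week/month)
    given its timestamp and a sorted tensor of bucket boundaries."""
    return torch.bucketize(timestamps, boundaries)

def info_nce(view_a, view_b, temperature=0.2):
    """Contrastive (InfoNCE-style) loss between two views of the same batch of users,
    e.g., a temporal-perspective embedding and an attribute-perspective embedding."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                    # pairwise user similarities
    labels = torch.arange(a.size(0), device=a.device)   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Example usage: 4 users with 64-dimensional embeddings from the two perspectives.
temporal_emb = torch.randn(4, 64)
attribute_emb = torch.randn(4, 64)
loss = info_nce(temporal_emb, attribute_emb)
```

In this reading, the contrastive objective pulls together the two views of the same user while pushing apart views of different users, which is one common way self-supervised signals are used to counter data sparsity in SR.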