Image deraining is a challenging task that involves restoring images degraded by rain streaks. While Convolutional Neural Networks (CNNs) have been widely applied to this task, existing approaches often stack basic convolutional blocks, which limits performance and compromises spatial detail. Moreover, the limited receptive field of convolutional layers leads to incomplete handling of non-uniform rain streaks. To address these issues, we propose a novel image deraining network that combines CNNs and transformers. Our network comprises two stages: an encoder-decoder architecture with a triple attention mechanism that captures informative features, and residual dual-branch transformer blocks that enhance local information modeling. To compensate for the transformer's weak local modeling capability, we introduce convolution into both the self-attention mechanism and the feed-forward network of the transformer block. In addition, we employ a frequency-domain contrastive learning method that enriches the contrastive sample information, ensuring that the restored image closely resembles the clear image in the frequency-domain space while remaining distinct from the rainy image. Extensive quantitative and qualitative experiments demonstrate that our proposed deraining network outperforms existing methods on public datasets.
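The frequency-domain contrastive idea described above can be illustrated with a minimal sketch: the restored image is pulled toward the clear image (positive sample) and pushed away from the rainy image (negative sample), with distances measured on FFT amplitude spectra. The function name, the ratio-style loss, and the use of amplitude-only spectra are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def freq_contrastive_loss(restored, clear, rainy, eps=1e-8):
    """Hypothetical sketch of a frequency-domain contrastive loss.

    Assumption: distances are L1 differences between FFT amplitude
    spectra, combined as positive/negative ratio; the actual loss in
    the paper may differ.
    """
    def amplitude(img):
        # Amplitude spectrum of the 2D discrete Fourier transform.
        return np.abs(np.fft.fft2(img))

    # Distance to the clear image (positive) and rainy image (negative).
    d_pos = np.mean(np.abs(amplitude(restored) - amplitude(clear)))
    d_neg = np.mean(np.abs(amplitude(restored) - amplitude(rainy)))
    # Minimizing the ratio pulls the output toward the clear spectrum
    # while keeping it distinct from the rainy spectrum.
    return d_pos / (d_neg + eps)
```

A perfectly restored image (identical to the clear one) drives the numerator, and thus the loss, to zero, while an output that still resembles the rainy input yields a large loss.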