Street scene change detection continues to attract interest in the computer vision community. It aims to identify the changed regions between paired street-view images captured at different times. State-of-the-art networks based on the encoder-decoder architecture leverage the feature maps at corresponding levels of the two temporal channels to obtain sufficient information about changes. Still, the efficiency of feature extraction, of feature correlation calculation, and even of the whole network requires further improvement. This paper proposes temporal attention and explores the impact of the dependency-scope size of temporal attention on change detection performance. In addition, building on the Temporal Attention Module (TAM), we introduce a more efficient and lightweight variant, the Dynamic Receptive Temporal Attention Module (DRTAM), and propose Concurrent Horizontal and Vertical Attention (CHVA) to improve the network's accuracy on specific challenging entities. On the street scene datasets 'GSV', 'TSUNAMI' and 'VL-CMU-CD', our approach achieves excellent performance, establishing new state-of-the-art scores without bells and whistles, while maintaining efficiency high enough for deployment in autonomous vehicles.
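To make the core idea concrete, below is a minimal PyTorch sketch of temporal attention with a local dependency scope: queries come from the t1 feature map while keys and values come from the t0 map, so each t1 location attends to the t0 features inside a window of size `scope`. The class name, the single-head design, and the 1x1-convolution projections are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Sketch: attention between two co-registered feature maps (t0, t1),
    restricted to a local dependency scope of `scope` x `scope` positions."""

    def __init__(self, channels: int, scope: int = 3):
        super().__init__()
        assert scope % 2 == 1, "dependency scope must be odd"
        self.scope = scope
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, feat_t0: torch.Tensor, feat_t1: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_t0.shape
        q = self.query(feat_t1)  # queries from the t1 branch, (B, C, H, W)
        k = self.key(feat_t0)    # keys and values from the t0 branch
        v = self.value(feat_t0)
        # Unfold keys/values into local windows around each spatial position.
        pad = self.scope // 2
        s2 = self.scope * self.scope
        k = F.unfold(k, self.scope, padding=pad).view(b, c, s2, h * w)
        v = F.unfold(v, self.scope, padding=pad).view(b, c, s2, h * w)
        q = q.view(b, c, 1, h * w)
        # Attention weights over the s*s window positions at each pixel.
        attn = (q * k).sum(dim=1, keepdim=True) * self.scale  # (B, 1, s2, H*W)
        attn = attn.softmax(dim=2)
        out = (attn * v).sum(dim=2)  # (B, C, H*W)
        return out.view(b, c, h, w)

# Example: attend over a 7x7 dependency scope on 64-channel feature maps.
tam = TemporalAttention(channels=64, scope=7)
f0, f1 = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
change_feat = tam(f0, f1)  # (2, 64, 32, 32)
```

Enlarging `scope` lets each location correlate with a wider neighborhood of the other temporal channel at a higher cost, which is the trade-off the dependency-scope study in the paper examines.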