Continuous sign language recognition (CSLR) is a challenging research task due to the lack of accurate temporal annotation in sign language data. Recently, hybrid "CNN + RNN" models have been widely used for CSLR. However, when extracting temporal features, most of these methods use a fixed temporal receptive field and therefore cannot extract temporal features well for every sign language word. In order to obtain more accurate temporal features, we propose a multiscale temporal network. The network consists of three main parts. A ResNet and two fully connected layers constitute the frame-wise feature extraction part. The temporal feature extraction part first extracts features at temporal receptive fields of different scales using the proposed multiscale temporal block (MST-block) to improve temporal modeling capability, and then further encodes the multiscale temporal features with a Transformer module to obtain more accurate temporal features. Finally, the proposed multilevel connectionist temporal classification (CTC) loss part is used for training to obtain the recognition results. The multilevel CTC loss enables better learning and updating of the shallow CNN parameters, introduces no additional parameters, and can be flexibly embedded in other models. Experimental results on two publicly available datasets demonstrate that the proposed method effectively extracts sign language features in an end-to-end manner without any prior knowledge, improves the accuracy of CSLR, and achieves competitive results.
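
To make the multiscale idea concrete, the following is a minimal PyTorch sketch of what an MST-block could look like, not the paper's actual implementation: parallel 1D temporal convolutions with different kernel sizes give each branch a different temporal receptive field, and a 1x1 convolution fuses the concatenated branch outputs back to the input width. The module name `MSTBlock`, the kernel sizes `(3, 5, 7)`, and the residual fusion are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class MSTBlock(nn.Module):
    """Hypothetical multiscale temporal block: each branch applies a
    temporal convolution with a different kernel size (i.e. a different
    temporal receptive field); the branch outputs are concatenated and
    fused back to the original channel width."""

    def __init__(self, channels: int, branch_kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # Convolution over the time axis; padding preserves length T.
                nn.Conv1d(channels, channels, kernel_size=k, padding=k // 2),
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
            )
            for k in branch_kernels
        )
        # 1x1 convolution fuses the concatenated multiscale features.
        self.fuse = nn.Conv1d(
            channels * len(branch_kernels), channels, kernel_size=1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, T) frame-wise features over time.
        multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multiscale) + x  # residual connection


# Usage: a batch of 2 clips, 512-dim frame features, 100 frames.
feats = torch.randn(2, 512, 100)
out = MSTBlock(512)(feats)  # -> (2, 512, 100)
```

In this sketch the kernel sizes are purely illustrative; the point is that each branch observes a different temporal window, so both short and long signs can be captured before the Transformer module encodes the fused sequence.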