Lipreading refers to recognizing a speaker's speech content from image sequences of lip movements, without any audio signal. Currently, most models combine a spatiotemporal (3D) convolutional layer with a 2D CNN to extract spatial and temporal features from the image sequence. However, while 2D convolutional layers can extract fine-grained spatial features, the single 3D convolutional layer used in these models cannot extract temporal information effectively. This paper addresses that limitation. First, the Temporal Shift Module (TSM) is applied to two different front-ends (one fully 2D-CNN-based, and one mixing 2D and 3D convolutions) to enhance temporal information extraction. Second, the influence of different TSM shift proportions and of different input sampling intervals on temporal information extraction is verified. Third, the effect of different temporal shifts on spatiotemporal feature extraction is compared. The proposed method is verified on two challenging word-level lipreading datasets, LRW and LRW-1000, and achieves new state-of-the-art performance.
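To make the TSM idea concrete, below is a minimal NumPy sketch of a temporal shift over a feature sequence: a fraction of the channels is shifted one step forward in time, an equal fraction one step backward, and the rest left in place, with zero-padding at the sequence ends. The `shift_ratio` value and the (T, C, H, W) layout are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def temporal_shift(x, shift_ratio=0.125):
    """Shift a fraction of channels along the time axis (TSM-style sketch).

    x: array of shape (T, C, H, W) -- a sequence of T feature maps.
    shift_ratio: fraction of channels shifted in EACH direction (assumed value).
    """
    T, C = x.shape[0], x.shape[1]
    fold = max(1, int(C * shift_ratio))
    out = np.zeros_like(x)
    # First `fold` channels: shift backward in time (frame t sees frame t+1).
    out[:-1, :fold] = x[1:, :fold]
    # Next `fold` channels: shift forward in time (frame t sees frame t-1).
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]
    # Remaining channels are left unchanged.
    out[:, 2 * fold:] = x[:, 2 * fold:]
    return out
```

Because the shift is a zero-cost re-indexing of existing features, it adds temporal mixing to an otherwise 2D front-end without extra parameters or FLOPs, which is what makes it attractive next to a full 3D convolution.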