Recently, self-supervised pretraining has achieved impressive results in end-to-end (E2E) automatic speech recognition (ASR). However, the dominant sequence-to-sequence (S2S) E2E model still struggles to fully exploit self-supervised pretraining because its decoder is conditioned on acoustic representations and therefore cannot be pretrained separately. In this paper, we propose a pretrained Transformer (Preformer) S2S ASR architecture based on hybrid CTC/attention E2E models to fully utilize pretrained acoustic models (AMs) and language models (LMs). In our framework, the encoder is initialized with a pretrained AM (wav2vec2.0), and the Preformer leverages CTC as an auxiliary task during both training and inference. Furthermore, we design a one-cross decoder (OCD), which relaxes the dependence on acoustic representations so that it can be initialized with a pretrained LM (DistilGPT2). Experiments conducted on the AISHELL-1 corpus achieve a 4.6% character error rate (CER) on the test set. Compared with our vanilla hybrid CTC/attention Transformer baseline, the proposed CTC/attention-based Preformer yields a 27% relative CER reduction. To the best of our knowledge, this is the first work to utilize both a pretrained AM and a pretrained LM in an S2S ASR system.
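A minimal sketch of the general recipe described above, assuming the Hugging Face Transformers checkpoints "facebook/wav2vec2-base" and "distilgpt2" and a plain cross-attention bridge between the two pretrained models; the one-cross decoder (OCD) itself is paper-specific and is only approximated here, so this is an illustrative outline rather than the authors' implementation:

```python
# Hypothetical sketch: encoder initialized from a pretrained AM (wav2vec2.0),
# a CTC head as the auxiliary branch, and a decoder initialized from a
# pretrained LM (DistilGPT2). Model names and dimensions are assumptions.
import torch.nn as nn
from transformers import Wav2Vec2Model, GPT2LMHeadModel


class HybridCTCAttentionSketch(nn.Module):
    def __init__(self, vocab_size: int,
                 am_name: str = "facebook/wav2vec2-base",
                 lm_name: str = "distilgpt2"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(am_name)       # pretrained AM
        enc_dim = self.encoder.config.hidden_size
        self.ctc_head = nn.Linear(enc_dim, vocab_size)               # CTC auxiliary branch
        self.decoder_lm = GPT2LMHeadModel.from_pretrained(lm_name)   # pretrained LM
        dec_dim = self.decoder_lm.config.n_embd
        # a single cross-attention pass standing in for the paper's OCD
        self.enc_proj = nn.Linear(enc_dim, dec_dim)
        self.cross_attn = nn.MultiheadAttention(dec_dim, num_heads=8,
                                                batch_first=True)
        self.out_head = nn.Linear(dec_dim, vocab_size)

    def forward(self, speech, token_ids):
        enc = self.encoder(speech).last_hidden_state                 # (B, T, enc_dim)
        ctc_logits = self.ctc_head(enc)                               # fed to CTC loss
        # contextual token representations from the pretrained LM body
        lm_hidden = self.decoder_lm.transformer(token_ids).last_hidden_state
        acoustic = self.enc_proj(enc)
        attn_out, _ = self.cross_attn(lm_hidden, acoustic, acoustic)  # one cross step
        att_logits = self.out_head(attn_out)                          # attention branch
        return ctc_logits, att_logits
```

Training would then combine a CTC loss on ctc_logits with a cross-entropy loss on att_logits, typically as a weighted sum, mirroring standard hybrid CTC/attention training.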
When positive and negative samples are imbalanced, hard negative mining strategies have been shown to help models learn subtler differences between positive and negative samples, thereby improving recognition performance. However, if an overly strict mining strategy is applied to the dataset, there is a risk of introducing false negative samples. Moreover, the mining strategy disrupts the difficulty distribution of samples in the real dataset, which may cause the model to overfit these difficult samples. In this paper, we therefore investigate how to trade off the difficulty of mined samples in order to obtain and exploit high-quality negative samples, addressing the problem through both the loss function and the training strategy. The proposed balance loss provides an effective measure of negative-sample quality by incorporating a self-supervised approach into the loss function, and applies a dynamic gradient modulation strategy to achieve finer gradient adjustment for samples of different difficulties. The proposed annealing training strategy then constrains the difficulty of the samples drawn by negative sample mining, providing the loss function with data sources of different difficulty distributions, and trains the model on samples of decreasing difficulty. Extensive experiments show that our new descriptors outperform previous state-of-the-art descriptors on patch verification, matching, and retrieval tasks.
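A hypothetical sketch of the annealing idea in a descriptor setting with Euclidean distances; the function name, margin, fallback rule, and exact schedule are illustrative assumptions, not the paper's balance loss or gradient modulation:

```python
# Assumed sketch: negative mining whose allowed difficulty decreases as training
# progresses. A negative is "harder" the closer it lies to its anchor; the
# annealed floor below rules out the hardest negatives more aggressively over
# time, one plausible reading of the decreasing-difficulty schedule.
import torch


def annealed_negative_mining(anchors, positives, candidates, progress, margin=1.0):
    """Select one negative per anchor under an annealed difficulty cap.

    anchors, positives: (B, D) descriptor batches; candidates: (N, D) pool of
    potential negatives; progress: float in [0, 1], 0 at the start of training.
    """
    d_neg = torch.cdist(anchors, candidates)                  # (B, N) anchor-negative distances
    d_pos = (anchors - positives).norm(dim=1, keepdim=True)   # (B, 1) anchor-positive distances
    # annealed floor: starts at d_pos (very hard negatives allowed) and rises
    # toward d_pos + margin (only easier negatives allowed) as progress -> 1
    floor = d_pos + progress * margin
    eligible = d_neg >= floor
    d_masked = d_neg.masked_fill(~eligible, float("inf"))
    neg_idx = d_masked.argmin(dim=1)                          # hardest eligible negative
    # anchors with no eligible candidate fall back to the overall easiest negative
    no_eligible = torch.isinf(d_masked.gather(1, neg_idx.unsqueeze(1))).squeeze(1)
    neg_idx = torch.where(no_eligible, d_neg.argmax(dim=1), neg_idx)
    return candidates[neg_idx]
```

A triplet or contrastive loss computed on (anchors, positives, mined negatives) could then be reweighted per sample according to its difficulty, in the spirit of the dynamic gradient modulation described above.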