Early distributed word representations such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) laid the groundwork for pretrained language models. Representative autoregressive language models are ELMo (Peters et al., 2018), GPT (Radford et al., 2018), GPT-2 (Radford et al., 2019), and XLNet (Yang et al., 2019); because they predict tokens conditioned on preceding context, they are better suited to text generation tasks. Representative autoencoding language models are BERT (Devlin et al., 2018), BERT-wwm (Cui et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2019), ERNIE (Sun et al., 2019a), ERNIE 2.0 (Sun et al., 2019b), and ELECTRA (Clark et al., 2020); because they condition on bidirectional context, they are better suited to understanding tasks such as entity and relation extraction.
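As a minimal sketch of this division of labor (not taken from the cited works), the two families are typically applied as follows with the Hugging Face transformers library; the checkpoint names, the example sentences, and the untrained token-classification head are illustrative assumptions, not a reference implementation.

```python
# Sketch: autoregressive vs. autoencoding models in `transformers`.
# Checkpoints and the NER head below are illustrative assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    AutoModelForTokenClassification,
)

# Autoregressive model (GPT-2): predicts the next token left to right,
# which makes it a natural fit for text generation.
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = gpt2_tok("Pretrained language models are", return_tensors="pt")
generated = gpt2.generate(**inputs, max_new_tokens=20)
print(gpt2_tok.decode(generated[0], skip_special_tokens=True))

# Autoencoding model (BERT): sees bidirectional context for every token,
# which suits per-token labeling tasks such as entity extraction. The
# classification head here is randomly initialized and would need
# fine-tuning on labeled NER data before its predictions are meaningful.
bert_tok = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9  # e.g., a BIO tag set for NER
)
enc = bert_tok("Barack Obama was born in Hawaii.", return_tensors="pt")
tag_ids = bert(**enc).logits.argmax(-1)  # one predicted tag id per token
print(tag_ids)
```

The design difference is visible in the heads: the causal-LM head scores a next-token distribution at each position, while the token-classification head assigns a label to every position using context from both directions, which is why the autoencoding family dominates extraction benchmarks.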