“…Over the past few years, the summarization task has witnessed a great deal of progress in both extractive (Nallapati et al., 2017; Liu and Lapata, 2019; Yuan et al., 2020; Cui et al., 2020; Jia et al., 2020; Feng et al., 2018) and abstractive (See et al., 2017; Cohan et al., 2018; Gehrmann et al., 2018; Zhang et al., 2019; Tian et al., 2019; Zou et al., 2020) approaches. […] Neural machine translation (@xcite), which directly applies a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (@xcite). […] Motivated by recent success in unsupervised cross-lingual embeddings (@xcite), models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to the same latent representation in a shared latent space (@xcite). […] Although the shared encoder is vital for mapping sentences from different languages into the shared latent space, it is weak at preserving the uniqueness and internal characteristics of each language, such as style, terminology, and sentence structure. […] For each language, the encoder and its corresponding decoder perform denoising auto-encoding (AE): the encoder generates latent representations from perturbed input sentences, and the decoder reconstructs the original sentences from those latent representations.…”
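The denoising auto-encoding objective described in the last sentence is concrete enough to illustrate in code. Below is a minimal PyTorch sketch of one AE training step: a shared encoder maps a perturbed sentence into a latent representation, and a language-specific decoder reconstructs the clean sentence from it. The module names (`SharedEncoder`, `Decoder`), the GRU architecture, the dimensions, and the noise model (word dropout plus local shuffling) are all illustrative assumptions for this sketch, not the paper's exact setup.

```python
# Sketch of one denoising auto-encoding (AE) step for unsupervised NMT.
# All names, sizes, and the noise model are illustrative assumptions.
import torch
import torch.nn as nn

PAD, BOS, VOCAB, DIM = 0, 1, 1000, 64

def add_noise(tokens, drop_p=0.1, k=3):
    """Perturb a token-id list: random word dropout plus a local shuffle
    in which each surviving token moves at most ~k positions."""
    kept = [t for t in tokens if torch.rand(1).item() > drop_p]
    if not kept:                      # never return an empty sentence
        kept = tokens[:1]
    keys = [i + torch.rand(1).item() * k for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

class SharedEncoder(nn.Module):
    """Maps sentences from either language into the shared latent space."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM, padding_idx=PAD)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)

    def forward(self, x):             # x: (batch, src_len) token ids
        _, h = self.rnn(self.emb(x))
        return h                      # (1, batch, DIM) latent summary

class Decoder(nn.Module):
    """Language-specific decoder reconstructing sentences from the latent."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM, padding_idx=PAD)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, latent, prev_tokens):    # teacher forcing
        h, _ = self.rnn(self.emb(prev_tokens), latent)
        return self.out(h)                     # (batch, tgt_len, VOCAB)

encoder, decoder = SharedEncoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

# One AE step: encode a *perturbed* sentence, reconstruct the clean one.
clean = torch.tensor([[5, 17, 42, 8, 99]])
noised = torch.tensor([add_noise(clean[0].tolist())])
latent = encoder(noised)
prev = torch.cat([torch.full((1, 1), BOS, dtype=torch.long),
                  clean[:, :-1]], dim=1)
logits = decoder(latent, prev)
loss = loss_fn(logits.reshape(-1, VOCAB), clean.reshape(-1))
loss.backward()
opt.step()
```

In full unsupervised NMT training, this reconstruction loss is combined with back-translation (and, in weight-sharing variants, with constraints tying the two languages' encoders together); the sketch isolates only the AE term described in the quoted passage.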