Abstract. Comprehension-based summary generation for short texts is currently an active research topic. In this paper, we improve the attention mechanism within the encoder-decoder framework and propose a comprehension-based short text summary generation model that integrates global and local semantic information. The model consists of a dual encoder and a decoder. The dual encoder combines global and local semantic information to fully capture the abstract features of the source text, and the improved attention mechanism adaptively combines all of the short text's information to provide the decoder with input carrying summary characteristics, so that the decoder can focus more accurately on the core content of the source text. We train and test the model on the LCSTS dataset. The experimental results show that, compared with the Seq2Seq model and Seq2Seq with standard attention, the proposed method produces higher-quality summaries with fewer repeated words and achieves better ROUGE scores.
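The abstract does not give the model's equations, so the following is only a minimal sketch, in PyTorch, of a dual-encoder summarizer in the spirit described above: one encoder yields per-token (local) states, a second yields a global sentence vector, and an attention step adaptively fuses the two into the decoder input. The class name, dimensions, and fusion gate are hypothetical choices for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DualEncoderSummarizer(nn.Module):
    """Sketch: a dual encoder (local per-token states + global sentence
    vector) whose outputs are adaptively fused by attention for the decoder.
    All design choices here are assumptions, not the paper's architecture."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.local_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)   # local states
        self.global_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)  # global vector
        self.decoder = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(2 * hid_dim, 1)        # scores the local states
        self.gate = nn.Linear(2 * hid_dim, hid_dim)  # fuses global + local context
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        src_e, tgt_e = self.embed(src), self.embed(tgt)
        local_h, _ = self.local_enc(src_e)       # (B, S, H) per-token states
        _, global_h = self.global_enc(src_e)     # (1, B, H) whole-text summary
        dec_state = global_h                     # initialize decoder globally
        logits = []
        for t in range(tgt_e.size(1)):
            q = dec_state[-1].unsqueeze(1).expand_as(local_h)
            scores = self.attn(torch.cat([local_h, q], -1)).softmax(dim=1)
            context = (scores * local_h).sum(dim=1)   # local attention context
            # Adaptive fusion of the global summary and the local context.
            fused = torch.tanh(self.gate(torch.cat([global_h[-1], context], -1)))
            step_in = torch.cat([tgt_e[:, t:t + 1], fused.unsqueeze(1)], -1)
            dec_out, dec_state = self.decoder(step_in, dec_state)
            logits.append(self.out(dec_out))
        return torch.cat(logits, dim=1)               # (B, T, vocab)

model = DualEncoderSummarizer(vocab_size=5000)
src = torch.randint(0, 5000, (2, 30))   # toy source token ids
tgt = torch.randint(0, 5000, (2, 10))   # toy summary prefix
print(model(src, tgt).shape)            # torch.Size([2, 10, 5000])
```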
Short texts carry little information, and comprehension-based summary generation for short texts is currently a hot and difficult problem. We propose an understanding-based short text summary generation model that combines multi-level semantic information. Within the encoder-decoder framework, we improve the encoder structure with a self-attention mechanism and a selective network so that it focuses on multi-level semantic information. The model fully exploits both the high-level global semantics and the shallow semantic information of the words within the text, and organically fuses the decoder's hidden state with the original text through two different attention mechanisms. The high-level and shallow semantic information adaptively provide the decoder with a syntactic-semantic vector carrying summary characteristics, so that the decoder can focus more accurately on the core content of the article. We select the LCSTS dataset for model training and testing. The experimental results show that, compared with Seq2Seq, Seq2Seq with standard attention, and the Transformer model, the proposed model generates higher-quality Chinese short text summaries and achieves better ROUGE scores.
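Again purely as an assumed sketch (the abstract gives no architectural details), the encoder side described here, a recurrent encoder augmented with self-attention and a selective network, might look roughly like the following. The SelectiveEncoder name, the sigmoid gate, and all hyperparameters are invented for illustration; the gate follows the general selective-encoding idea of filtering each token state by a global sentence representation.

```python
import torch
import torch.nn as nn

class SelectiveEncoder(nn.Module):
    """Sketch: GRU encoder + token-level self-attention (shallow, word-level
    semantics) + a selective gate driven by the global sentence state
    (high-level semantics). All details are assumptions for illustration."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Self-attention lets each token state attend to every other token.
        self.self_attn = nn.MultiheadAttention(hid_dim, heads, batch_first=True)
        # Selective gate: element-wise filter conditioned on the global state.
        self.gate = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, src):
        h, last = self.rnn(self.embed(src))    # (B, S, H) states, (1, B, H) final
        h, _ = self.self_attn(h, h, h)         # word-level self-attention
        g = last[-1].unsqueeze(1).expand_as(h) # broadcast global sentence state
        gate = torch.sigmoid(self.gate(torch.cat([h, g], -1)))
        return gate * h                        # selected multi-level token states

enc = SelectiveEncoder(vocab_size=5000)
states = enc(torch.randint(0, 5000, (2, 30)))
print(states.shape)  # torch.Size([2, 30, 256])
```

In a full model, these selected states would feed the decoder's two attention mechanisms alongside the global sentence vector.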