Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence. Graph structure is then modeled within the seq2seq framework to exploit the structural information in AMR graphs. However, previous approaches only consider the relations between directly connected concepts and ignore the rich structure in AMR graphs. In this paper we remove this strong limitation and propose a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, the Transformer. In particular, several methods are explored to learn structural representations between two concepts. Experimental results on English AMR benchmark datasets show that our approach significantly outperforms the state of the art, with BLEU scores of 29.66 and 31.82 on LDC2015E86 and LDC2017T10, respectively. To the best of our knowledge, these are the best results achieved so far by supervised models on these benchmarks.
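To make the idea of structure-aware self-attention concrete, the following PyTorch snippet is a minimal sketch in the spirit the abstract describes: a learned embedding of the structural relation between two concepts (for example, a label derived from the path connecting them in the AMR graph) is added to the key and value terms of ordinary self-attention, in the style of relative-position representations. The class name and the names `num_relations` and `rel_ids` are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of relation-aware self-attention (not the paper's exact model):
# attention scores combine the usual query-key dot product with a query-relation
# term, and the output mixes values with per-pair relation embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureAwareSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.d = d_model
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # one learned embedding per relation type, for keys and for values
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)

    def forward(self, x, rel_ids):
        # x: (batch, n, d_model); rel_ids: (batch, n, n) integer relation labels
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(rel_ids), self.rel_v(rel_ids)      # (batch, n, n, d)
        scores = torch.einsum('bid,bjd->bij', q, k)            # content-content
        scores = scores + torch.einsum('bid,bijd->bij', q, rk) # content-relation
        attn = F.softmax(scores / self.d ** 0.5, dim=-1)
        out = torch.einsum('bij,bjd->bid', attn, v)
        out = out + torch.einsum('bij,bijd->bid', attn, rv)
        return out

# Usage with toy shapes: 2 AMR graphs, 5 concepts each, 10 relation types.
layer = StructureAwareSelfAttention(d_model=64, num_relations=10)
x = torch.randn(2, 5, 64)
rel = torch.randint(0, 10, (2, 5, 5))
y = layer(x, rel)   # (2, 5, 64)
```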
Using group theory, we analyze cycle GF(2^p) codes whose associated graphs are Cayley graphs. First, we show that through row and column permutations the parity-check matrix H can be put into a concatenation of row-permuted block-diagonal matrices; encoding with this form can be performed in linear time and in parallel. Second, we derive a rule for determining the nonzero entries of H and present determinate and semi-determinate codes. Our simulations show that the determinate and semi-determinate codes perform better than codes with randomly generated nonzero entries over GF(16) and GF(64), and perform comparably over GF(256). The constructed determinate and semi-determinate codes over GF(64) and GF(256) can outperform binary irregular counterparts of the same block length. One distinct advantage of determinate and semi-determinate codes is that they greatly reduce the storage cost of H for decoding. The results in this correspondence are appealing for the implementation of efficient encoders and decoders for this class of promising LDPC codes, especially when the block length is large.
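As a concrete illustration of the graph-code correspondence the abstract relies on, the NumPy sketch below builds the 0/1 support of H for a cycle code from a Cayley graph of the cyclic group Z_n: rows correspond to group elements (check nodes), columns to graph edges (code symbols), so every column has weight 2. Which nonzero GF(2^p) values fill those two positions is exactly what the paper's determinate and semi-determinate rules decide; this sketch leaves them unspecified. The function name and the choice of Z_n with connection set S are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: vertex-edge incidence matrix of Cay(Z_n, S) as the support of H.
import numpy as np

def cayley_cycle_code_support(n, S):
    """Return the 0/1 support of H for the Cayley graph Cay(Z_n, S)."""
    assert all((-s) % n in S for s in S), "connection set must be symmetric"
    # undirected edges of the Cayley graph; each edge becomes one code symbol
    edges = sorted({tuple(sorted((v, (v + s) % n))) for v in range(n) for s in S})
    H = np.zeros((n, len(edges)), dtype=int)   # rows = group elements (check nodes)
    for j, (u, w) in enumerate(edges):         # columns = edges (variable nodes)
        H[u, j] = 1
        H[w, j] = 1
    return H, edges

H, edges = cayley_cycle_code_support(n=8, S={1, 3, 5, 7})
print(H.shape, H.sum(axis=0))   # (8, 16); every column sums to 2
```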