2018
DOI: 10.48550/arxiv.1809.00794
Preprint

Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation

Cited by 15 publications (10 citation statements)
References 25 publications
“…Our proposed GTAE achieves the best content preservation ability as expected, while maintaining highly comparable performance on naturalness and style transfer intensity with the state of the art. The model is implemented using the Texar [41] toolkit for text generation based on the TensorFlow backend [42].…”
Section: Methods (mentioning)
confidence: 99%
“…We used an encoder-decoder sequence-to-sequence architecture with a bidirectional forward-backward RNN encoder and an attention-based RNN decoder [23], as implemented in PyTorch-Texar [24]. While this architecture is no longer the top performer in terms of the ROUGE metric - currently, large pre-trained self-attention models such as BERT are the state of the art [25] - it is much more efficient in training.…”
Section: Methods (mentioning)
confidence: 99%
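The architecture described in this statement (a bidirectional RNN encoder feeding an attention-based RNN decoder) can be sketched roughly as follows. This is a hypothetical illustration in plain PyTorch, not the PyTorch-Texar code the cited authors used; the class name Seq2SeqWithAttention and all sizes (vocab_size, embed_dim, hidden_dim) are assumed placeholder values.

# Minimal sketch, assuming plain PyTorch rather than the Texar API.
import torch
import torch.nn as nn

class Seq2SeqWithAttention(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional forward-backward GRU encoder.
        self.encoder = nn.GRU(embed_dim, hidden_dim, bidirectional=True,
                              batch_first=True)
        # Unidirectional GRU decoder; its input is the previous target token
        # embedding concatenated with the attention context vector.
        self.decoder = nn.GRU(embed_dim + 2 * hidden_dim, hidden_dim,
                              batch_first=True)
        self.attn_score = nn.Linear(2 * hidden_dim + hidden_dim, 1)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        enc_out, _ = self.encoder(self.embedding(src_ids))        # [B, S, 2H]
        dec_state = None
        dec_hidden = torch.zeros(src_ids.size(0), 1, enc_out.size(-1) // 2,
                                 device=src_ids.device)           # [B, 1, H]
        logits = []
        for t in range(tgt_ids.size(1)):
            # Additive attention over encoder states.
            query = dec_hidden.expand(-1, enc_out.size(1), -1)    # [B, S, H]
            scores = self.attn_score(torch.cat([enc_out, query], dim=-1))
            weights = torch.softmax(scores, dim=1)                # [B, S, 1]
            context = (weights * enc_out).sum(dim=1, keepdim=True)  # [B, 1, 2H]
            step_in = torch.cat([self.embedding(tgt_ids[:, t:t + 1]), context],
                                dim=-1)
            dec_hidden, dec_state = self.decoder(step_in, dec_state)
            logits.append(self.output(dec_hidden))
        return torch.cat(logits, dim=1)                           # [B, T, V]

For example, model = Seq2SeqWithAttention() followed by model(torch.randint(0, 10000, (2, 7)), torch.randint(0, 10000, (2, 5))) returns a [2, 5, 10000] logit tensor for teacher-forced decoding over a batch of two sequences.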
“…We implement our experiments using TensorFlow through the Texar platform [15]. We use a three-layer Transformer with the default eight heads in the encoder and decoder.…”
Section: Experimental Settings (mentioning)
confidence: 99%
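The configuration quoted here (a three-layer Transformer with eight attention heads in both encoder and decoder) corresponds roughly to the sketch below. It uses plain PyTorch modules rather than the Texar/TensorFlow code the cited authors used, and d_model and the toy tensor shapes are assumed values.

# Minimal sketch of a 3-layer, 8-head Transformer encoder-decoder in PyTorch.
import torch
import torch.nn as nn

d_model, num_heads, num_layers = 512, 8, 3

encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=num_heads,
                                           batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=num_heads,
                                           batch_first=True)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=num_layers)

# Toy forward pass: batch of 2, source length 10, target length 6.
src = torch.randn(2, 10, d_model)
tgt = torch.randn(2, 6, d_model)
memory = encoder(src)
out = decoder(tgt, memory)   # [2, 6, d_model]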