Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations 2019
DOI: 10.18653/v1/p19-3027
Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation

Abstract: We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks that transform any inputs into natural language, such as machine translation, summarization, dialog, content manipulation, and so forth. With the design goals of modularity, versatility, and extensibility in mind, Texar extracts common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules, and allows arbitrary model architectures and algorithmic paradigms. In Texar…
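To make the modular design described in the abstract concrete, below is a minimal sketch of assembling a text generation model from reusable Texar-PyTorch modules. The class names (WordEmbedder, TransformerEncoder, TransformerDecoder) follow the Texar-PyTorch documentation, but the specific constructor arguments and hparams keys shown are illustrative assumptions, not a verified usage of the API.

```python
# Minimal sketch of composing a text generation model from Texar-PyTorch
# modules, in the spirit of the modular design described in the abstract.
# Class names follow the Texar-PyTorch documentation; the exact constructor
# arguments and hparams keys used below are assumptions for illustration.
import texar.torch as tx

VOCAB_SIZE = 30_000

# Reusable token-embedding module.
embedder = tx.modules.WordEmbedder(vocab_size=VOCAB_SIZE, hparams={"dim": 512})

# Self-attention encoder; its architecture is configured via hparams
# rather than code, so the same module can serve many tasks.
encoder = tx.modules.TransformerEncoder(hparams={"num_blocks": 6})

# Decoder producing a distribution over the vocabulary; swapping this
# module (e.g. for an RNN decoder) changes the architecture without
# touching the rest of the pipeline.
decoder = tx.modules.TransformerDecoder(
    vocab_size=VOCAB_SIZE,
    hparams={"num_blocks": 6},
)
```

The sketch is only meant to illustrate how interchangeable modules are assembled and configured through hyperparameters; consult the Texar documentation for the actual training and decoding interfaces.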

Cited by 38 publications (28 citation statements) · References 57 publications
“…We largely follow the same training and inference setups as in Radford et al (2018) for the GPT model and Radford et al (2019) for the GPT2 variants. Experiments are implemented with the text generation toolkit Texar (Hu et al, 2019). We provide more details in Appendix B.…”
Section: Hyperparameters
confidence: 99%
“…For fair comparison, we use Texar (Hu et al, 2018) as our codebase to implement all proposed methods and baselines, and to compare models on IWSLT 2014 and Gigaword. The performance of our implementation is at least comparable with the reported results.…”
Section: Datasets and Setup
confidence: 99%
“…Table 4 shows performance comparisons of seven baselines and our model. We implement all baseline models on Texar (Hu et al, 2018) under the same setup as ours. Our model consistently achieves the highest scores on the test set across the ROUGE-1, -2, and -L metrics.…”
Section: Abstractive Summarization
confidence: 99%
“…It is built in C++ and designed for fast training in multi-GPU systems. Texar (Hu et al, 2018) is a text generation toolkit affiliated with Carnegie Mellon University, featuring a similar emphasis on modularity to ICE-CAPS. It includes reinforcement learning capabilities alongside its sequence modelling tools.…”
Section: Related Toolkits
confidence: 99%