2019
DOI: 10.48550/arxiv.1909.01187
Preprint

Encode, Tag, Realize: High-Precision Text Editing

Cited by 8 publications (13 citation statements)
References 0 publications
“…Specifically, LaserTagger (Malmi et al. 2019) predicts, for each token, an edit operation: keep it, delete it, or add a new token/phrase from a handcrafted vocabulary. PIE (Awasthi et al. 2019) iteratively predicts token-level edits for a fixed number of iterations in a non-autoregressive way.…”
Section: GEC by Generating Edits
confidence: 99%
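To make the tag-and-realize idea in this citation concrete, below is a minimal sketch of how token-level KEEP/DELETE tags with an optional inserted phrase could be realized into an edited sentence. The tag representation, the `realize` function, and the example tags are illustrative assumptions, not the cited papers' actual implementation or vocabulary.

```python
# Minimal illustration of a LaserTagger-style "tag then realize" step.
# A tag is (base_op, added_phrase): base_op is "KEEP" or "DELETE";
# added_phrase (possibly empty) is inserted before the source token and
# would come from a small handcrafted phrase vocabulary.

from typing import List, Tuple

Tag = Tuple[str, str]

def realize(source_tokens: List[str], tags: List[Tag]) -> str:
    """Apply token-level edit tags to the source and return the edited text."""
    output: List[str] = []
    for token, (op, phrase) in zip(source_tokens, tags):
        if phrase:                # insertion from the phrase vocabulary
            output.append(phrase)
        if op == "KEEP":          # copy the source token
            output.append(token)
        # op == "DELETE": drop the source token
    return " ".join(output)

if __name__ == "__main__":
    # Hypothetical sentence-fusion example: keep the first clause, delete the
    # repeated subject, and insert "and he" before "died".
    source = ["Turing", "was", "born", "in", "1912", ".",
              "Turing", "died", "in", "1954", "."]
    tags = [("KEEP", "")] * 5 + [("DELETE", ""), ("DELETE", ""),
                                 ("KEEP", "and he"), ("KEEP", ""),
                                 ("KEEP", ""), ("KEEP", "")]
    print(realize(source, tags))
    # -> "Turing was born in 1912 and he died in 1954 ."
```

Because the output is mostly copied from the input, such edit-tagging models can be more precise and far less prone to hallucination than free-form seq2seq generation, at the cost of only being able to insert phrases present in the predefined vocabulary.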
“…For the English GEC task, we compare the proposed S2A model to several representative systems, including three seq2seq baselines (Transformer Big, BERT-fuse (Kaneko et al. 2020), PRETLarge (Kiyono et al. 2019)), four sequence-tagging models (LaserTagger (Malmi et al. 2019), PIE (Awasthi et al. 2019), GECToR (Omelianchuk et al. 2020), Seq2Edits (Stahlberg and Kumar 2020)), and a pipeline model, ESD+ESC (Chen et al. 2020). Specifically, for GECToR we report the results obtained with the pretrained BERT model, with XLNet (Yang et al. 2019), and with an ensemble that integrates three different pre-trained language models.…”
Section: Baselines
confidence: 99%
“…These models generate full sequences of code tokens left-to-right with any prefix acting as the (partial) user intent. While LMs generate realistic-looking outputs, they are known to occasionally "hallucinate" [27,22,23,19], i.e. generate plausible but incorrect content.…”
Section: Introduction
confidence: 99%