Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
DOI: 10.18653/v1/2022.in2writing-1.14

Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision

Abstract: Revision is an essential part of the human writing process. It tends to be strategic, adaptive, and, more importantly, iterative in nature. Despite the success of large language models on text revision tasks, they are limited to non-iterative, one-shot revisions. Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants. In this work, we present a human-in-the-loop iterative…

Cited by 10 publications (5 citation statements)
References 12 publications
“…With deep sequence-to-sequence models [42,239], writing assistants were able to use a combination of human-labeled and machine-labeled data with several thousand sentences or examples [160,162]. Several recent works [35,45,58,65,174,188,268] take advantage of large language models that are already pre-trained on corpora of millions of sentences. They then use small amounts of human-labeled data to tune the model (e.g., instruction tuning [204,218,226]) and in many cases simply prompt the model in a zero-shot manner to help with the writing task.…”
Section: Additional Background C1 Technological Evolution in Writing…
confidence: 99%
“…Another task, SentRev, serves as an enhanced version of GEC, aiming not only to address GEC but also to polish academic English style and improve text fluency. However, despite various attempts [5,6] to advance this task from multiple perspectives, the outcomes have been less than ideal.…”
Section: Related Work
confidence: 99%
“…The Learned Refiner necessitates a training process, and the acquisition of supervised refinement typically involves pairs of feedback and refinement (Schick et al., 2022; Du et al., 2022b; Yasunaga and Liang, 2020; Madaan et al., 2021). CURIOUS (Madaan et al., 2021) initially constructs a graph that represents relevant influences.…”
Section: Learned Refiners
confidence: 99%
“…PEER (Schick et al., 2022) is an advanced collaborative language model that replicates the entire writing process, encompassing drafting, suggesting modifications, proposing edits, and providing explanations for its actions. In contrast, Read, Revise, Repeat (R3) (Du et al., 2022b) aims to achieve superior text revisions with minimal human intervention. It achieves this by analyzing model-generated revisions and user feedback, making document revisions, and engaging in repeated human-machine interactions.…”
Section: Learned Refiners
confidence: 99%