Findings of the Association for Computational Linguistics: EACL 2023
DOI: 10.18653/v1/2023.findings-eacl.80
NapSS: Paragraph-level Medical Text Simplification via Narrative Prompting and Sentence-matching Summarization

Junru Lu,
Jiazheng Li,
Byron Wallace
et al.

Abstract: Accessing medical literature is difficult for laypeople because the content is written for specialists and contains medical jargon. Automated text simplification methods offer a potential means to address this issue. In this work, we propose a summarize-then-simplify two-stage strategy, which we call NapSS, identifying the relevant content to simplify while ensuring that the original narrative flow is preserved. In this approach, we first generate reference summaries via sentence matching between the original and t…

Cited by 5 publications (1 citation statement)
References 45 publications
“…Intuitively, PLMs pre-trained on a technical corpus can assign higher likelihoods to technical terms than those pre-trained on a general corpus. Based on this intuition, Devaraj et al. 53 proposed a new readability evaluation metric that calculates the likelihood scores of input texts with a masked language model trained on a technical corpus. Devaraj et al. 57 proposed a RoBERTa-based method to classify factual errors in text simplification, such as insertions, deletions, and substitutions. Optimization Methods: Lu et al. 52 proposed the summarize-then-simplify method for paragraph-level medical text simplification, which uses narrative prompts with key phrases to encourage factual consistency between the input and the output.…”
mentioning
confidence: 99%
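The first stage of the summarize-then-simplify strategy builds reference summaries by matching sentences between the original document and its simplified counterpart. A minimal sketch of that idea is shown below, assuming a simple symmetric token-overlap score as a stand-in for the paper's actual matching criterion; the helper names (`tokenize`, `overlap_score`, `match_sentences`) and the example sentences are illustrative, not from the paper.

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase alphanumeric word tokens.
    return re.findall(r"[a-z0-9]+", text.lower())

def overlap_score(sent_a, sent_b):
    # Symmetric token-overlap (Dice-style) score; a rough proxy
    # for ROUGE-like lexical similarity. Illustrative only.
    a, b = Counter(tokenize(sent_a)), Counter(tokenize(sent_b))
    common = sum((a & b).values())
    total = sum(a.values()) + sum(b.values())
    return 2 * common / total if total else 0.0

def match_sentences(source_sents, target_sents):
    # For each simplified (target) sentence, pick the best-matching
    # source sentence; the union forms an extractive reference summary.
    picked = []
    for tgt in target_sents:
        best = max(source_sents, key=lambda src: overlap_score(src, tgt))
        if best not in picked:
            picked.append(best)
    # Preserve the source document's narrative order.
    return sorted(picked, key=source_sents.index)

source = [
    "Randomised controlled trials assessed the efficacy of the intervention.",
    "Adverse events were rare and generally mild.",
    "Funding sources were disclosed by all study authors.",
]
target = [
    "Studies tested whether the treatment works.",
    "Side effects were rare and mild.",
]
summary = match_sentences(source, target)
# summary keeps only the source sentences covered by the simplified text,
# in their original order.
```

Restricting the summary to matched sentences focuses the downstream simplifier on content that actually survives into the simplified version, while sorting by source order preserves the narrative flow the paper emphasizes.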