2022
DOI: 10.1016/j.jbi.2022.104040

Question-aware transformer models for consumer health question summarization

Cited by 18 publications (13 citation statements)
References 28 publications
“…Sanchez-Gomez et al. (2022) developed a query-focused multi-objective memetic algorithm to generate extractive summaries for medical texts. Yadav et al. (2022) designed a consumer health question summarization approach by introducing various cloze tasks to pre-trained transformer models, aiming for better coverage of the question focus in the summarized questions. Additionally, Khatter and Ahlawat (2022) proposed a hybrid model that integrates self-attention into the bi-directional LSTM auto-encoder to generate information-rich abstracts through extractive summarization.…”
Section: Related Work
confidence: 99%
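
To make the cloze-task idea attributed to Yadav et al. (2022) above concrete, here is a minimal sketch of one such pre-training step, assuming a BART backbone from the Hugging Face transformers library. The single mask-the-focus cloze task, the model choice, the example question, and the focus span are all illustrative assumptions, not details taken from the cited paper.

```python
# Minimal sketch of a cloze-style training step for question-focus
# coverage. ASSUMPTION: a single "mask the question focus" cloze task;
# the cited work's actual cloze tasks are not reproduced here.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical consumer health question and focus span.
question = ("I was diagnosed with atrial fibrillation last month. "
            "Is it safe to keep taking ibuprofen for my back pain?")
focus = "atrial fibrillation"

# Cloze input hides the focus; the target asks the model to recover it.
cloze_input = question.replace(focus, tokenizer.mask_token)
batch = tokenizer(cloze_input, return_tensors="pt")
labels = tokenizer(focus, return_tensors="pt").input_ids

# One gradient step: infilling the masked focus is intended to bias the
# model toward covering the question focus when it later summarizes.
loss = model(**batch, labels=labels).loss
loss.backward()
```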
“…Recent advancements in this process have been reported, including single-document (Mao et al., 2019), multi-document (Agarwal and Chatterjee, 2022), and query-focused (Sanchez-Gomez et al., 2022) text summarization, among others. Specifically, prior studies have attempted to apply text summarization techniques to the health-care domain, such as electronic health records (Pivovarov and Elhadad, 2015) and question answering (Yadav et al., 2022). Although these studies were designed to help health-care professionals or consumers with health-related document retrieval, extraction or classification, they did not consider how to assist lay people (e.g.…”
Section: Introduction
confidence: 99%
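
The query-focused extractive setting mentioned above can be illustrated with a much simpler baseline than the cited memetic algorithm: score each sentence by its similarity to the query and extract the top-ranked ones. The sketch below assumes scikit-learn TF-IDF features and an invented toy document; it shows the generic idea only, not the method of Sanchez-Gomez et al. (2022).

```python
# Query-focused extractive summarization baseline: rank sentences by
# TF-IDF cosine similarity to the query, keep the top k in document order.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def query_focused_summary(sentences, query, k=2):
    vec = TfidfVectorizer().fit(sentences + [query])
    scores = cosine_similarity(vec.transform(sentences),
                               vec.transform([query])).ravel()
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]  # restore document order

doc = ["Aspirin thins the blood.",
       "It is often prescribed after a heart attack.",
       "The drug was first synthesized in 1897."]
print(query_focused_summary(doc, "aspirin after heart attack", k=2))
```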
“…Transformers: The Transformer is a deep learning architecture proposed by Google in 2017. It has achieved great success in the field of NLP [44][45][46]. Owing to its unique attention mechanism and excellent performance in NLP, researchers have shown great interest in its application to trajectory prediction.…”
Section: Related Work
confidence: 99%
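
The "unique attention mechanism" this statement refers to is the scaled dot-product attention of the original 2017 Transformer: softmax(QK^T / sqrt(d_k))V. A minimal PyTorch sketch, omitting multi-head projections and masking:

```python
# Scaled dot-product attention, the core operation of the Transformer.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v                             # weighted sum of values

q = k = v = torch.randn(1, 5, 64)  # toy self-attention over 5 tokens
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 5, 64])
```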
“…In recent years, pre-trained models have shown notable performance on many downstream Natural Language Processing (NLP) tasks such as Question-Answering (QA), summarization, machine translation, sentiment analysis, etc. [1], [2], [3], [4], [5], [6]. To use a pre-trained model for a task other than the one on which it was trained [7], fine-tuning on a task-specific supervised dataset is required.…”
Section: Introduction
confidence: 99%
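
A minimal sketch of the fine-tuning setup this passage describes, assuming a T5 backbone and a single invented question-summary pair; the model choice, the "summarize:" prefix, the learning rate, and the example data are illustrative assumptions, not drawn from the cited works.

```python
# Fine-tuning a pre-trained seq2seq model on a task-specific supervised
# dataset (here: toy consumer-health question summarization).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Hypothetical supervised pair: long question -> short summary question.
pairs = [("summarize: My 6-year-old has had a fever for three days "
          "along with a rash on his arms. Should we go to the ER?",
          "Does a three-day fever with a rash in a child require the ER?")]

model.train()
for source, target in pairs:
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss  # cross-entropy vs. summary
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```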