2020
DOI: 10.1016/j.knosys.2020.105964

Biomedical-domain pre-trained language model for extractive summarization

Cited by 49 publications (13 citation statements)
References 19 publications
“…Du et al. proposed the BioBERTSum model [20] to obtain sentence- and token-level contextual representations. The model employs a domain-aware bidirectional language model pre-trained on a large-scale biomedical corpus.…”
Section: B. Other Approaches in Text Summarization
confidence: 99%
“…Extractive summarization: Du et al. [51], Moradi et al. [162], Padmakumar et al. [174], Kanwal et al. [101], Dan et al. [44], Song et al. [220]. Abstractive summarization: Wallace et al. [247], Gharebagh et al. [62].…”
Section: Question Answering
confidence: 99%
“…To exploit advanced pre-trained language models for text summarization in the biomedical domain, existing methods incorporate domain knowledge via domain fine-tuning. For biomedical extractive summarization, Du et al. [51] proposed a novel model, BioBERTSum, which uses a domain-aware pre-trained language model as the encoder and fine-tunes it on the biomedical extractive summarization task. Gharebagh et al. [62] utilized domain knowledge, namely salient medical ontological terms, to aid content selection in a SciBERT-based clinical abstractive summarization model.…”
Section: Text Summarization
confidence: 99%
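The BioBERTSum approach described in the statement above pairs a domain pre-trained encoder with a sentence-scoring head fine-tuned for extraction. Below is a minimal sketch of that pattern, not the authors' released implementation: the BioBERT checkpoint name is a plausible choice rather than the paper's exact model, and the untrained linear scorer stands in for a head that would be fine-tuned with sentence-selection supervision.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Domain pre-trained encoder (checkpoint name is an assumption, not
# necessarily the one used in BioBERTSum).
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
encoder = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

# Sentence-level scoring head; in the real model this would be fine-tuned
# on extractive summarization labels. Here it is untrained and only
# illustrates the architecture.
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)

sentences = [
    "Aspirin irreversibly inhibits platelet cyclooxygenase.",
    "The trial enrolled 120 patients across three sites.",
    "Low-dose aspirin reduced the rate of cardiovascular events.",
]

with torch.no_grad():
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    # Use each sentence's [CLS] vector as its contextual representation.
    cls_vectors = encoder(**batch).last_hidden_state[:, 0]
    scores = scorer(cls_vectors).squeeze(-1)  # higher = more summary-worthy

# Select the top-scoring sentences, preserving document order.
top = sorted(scores.topk(2).indices.tolist())
print(" ".join(sentences[i] for i in top))
```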
“…One is to convert the input text into character vectors; the other is to convert it into word vectors. To improve the performance of medical entity recognition, we use the pre-trained model ELMo [43] to generate the required character and word vectors discussed in this paper.…”
Section: Input Layer
confidence: 99%
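The statement above cites ELMo as the source of character- and word-level vectors for the input layer. A minimal sketch of producing such contextual token embeddings with AllenNLP's ELMo module follows; the pretrained-file URLs are the commonly used AllenNLP defaults and are assumptions, not the cited paper's exact configuration.

```python
from allennlp.modules.elmo import Elmo, batch_to_ids

# Commonly used pretrained ELMo files (assumed, not taken from the paper).
options_file = ("https://allennlp.s3.amazonaws.com/elmo/"
                "2x4096_512_2048cnn_2xhighway/"
                "elmo_2x4096_512_2048cnn_2xhighway_options.json")
weight_file = ("https://allennlp.s3.amazonaws.com/elmo/"
               "2x4096_512_2048cnn_2xhighway/"
               "elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5")

# One output representation per token, no dropout: ELMo combines its
# character-CNN and biLM layers into a single 1024-d contextual vector.
elmo = Elmo(options_file, weight_file, num_output_representations=1,
            dropout=0.0)

sentences = [["The", "patient", "was", "given", "aspirin"]]
character_ids = batch_to_ids(sentences)   # character-level input ids
output = elmo(character_ids)
embeddings = output["elmo_representations"][0]
print(embeddings.shape)                   # torch.Size([1, 5, 1024])
```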