2021
DOI: 10.1093/bioinformatics/btab702
BioVAE: a pre-trained latent variable language model for biomedical text mining

Abstract: Large-scale pre-trained language models (PLMs) have advanced state-of-the-art (SOTA) performance on various biomedical text mining tasks. The power of such PLMs can be combined with the advantages of deep generative models, and several existing models exemplify such combinations. However, they are trained only on general-domain text, and biomedical models are still missing. In this work, we describe BioVAE, the first large-scale pre-trained latent variable language model for the biomedical domain, w…
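The combination the abstract describes, a pre-trained language model paired with a deep generative model, is essentially a variational autoencoder over text: an encoder maps a sentence to a Gaussian latent code z, and an autoregressive decoder reconstructs the sentence conditioned on z. The following is a minimal, hypothetical PyTorch sketch of that idea; the small GRU encoder/decoder and all sizes are illustrative stand-ins for the large pre-trained Transformers the paper actually uses, not the authors' implementation.

```python
# Minimal latent-variable language model sketch (toy VAE over token sequences).
# All module sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class TextVAE(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, latent=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)      # posterior mean
        self.to_logvar = nn.Linear(hidden, latent)  # posterior log-variance
        self.z_to_h = nn.Linear(latent, hidden)     # inject z into decoder state
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        _, h = self.encoder(x)                      # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        dec, _ = self.decoder(x, h0)                # teacher forcing on input tokens
        logits = self.out(dec)
        # KL(q(z|x) || N(0, I)) term of the ELBO
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return logits, kl.mean()

model = TextVAE()
tokens = torch.randint(0, 30522, (4, 16))           # a toy batch of token IDs
logits, kl = model(tokens)
recon = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),    # predict the next token
    tokens[:, 1:].reshape(-1))
loss = recon + kl                                   # negative ELBO
```

Training minimizes the negative evidence lower bound (reconstruction loss plus KL term); swapping the toy GRUs for pre-trained Transformer weights is the step that distinguishes models like BioVAE from a plain text VAE.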


Cited by 6 publications (1 citation statement)
References 11 publications
“…Especially with the development of large language models like BERT and GPT, the extraction of specific information has become more efficient [29]. Currently, pre-trained models tailored for biomedical literature, such as bioBERT [30], BioVAE [31], bioGPT [32] and Bioformer [33] have been successively developed.…”
Section: Introduction · Citation type: mentioning
Confidence: 99%
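The citing passage describes applying pre-trained biomedical models to extract specific information from literature. A minimal sketch of that workflow with the Hugging Face transformers pipeline follows; the checkpoint ID is a placeholder (an assumption) for any token-classification model fine-tuned on biomedical NER, and the example sentence is invented.

```python
# Hypothetical sketch: entity extraction from biomedical text with a
# pre-trained model, as the citing paper describes. The model ID below is a
# placeholder; substitute a checkpoint fine-tuned for biomedical NER.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="some-org/biomedical-ner-checkpoint",  # placeholder model ID
    aggregation_strategy="simple",               # merge word-piece tokens
)

for entity in ner("BRCA1 mutations are associated with breast cancer risk."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```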