2019
DOI: 10.1016/j.yjbinx.2019.100058

Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations

Cited by 16 publications (7 citation statements)
References 34 publications
“…The work presented in [ 20 ] focused on identifying the similarity between outcomes reported in the scientific literature. To do so, this team annotated outcomes in a corpus of texts about clinical trials from PubMed Central; these data were later used to train deep learning algorithms (BERT-based models, [ 21 ]) for automatic similarity assessment.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
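For context, the BERT-based similarity assessment described in this excerpt can be sketched in a few lines. The snippet below is a minimal illustration, assuming the Hugging Face transformers library and a generic bert-base-uncased checkpoint, with mean-pooled embeddings compared by cosine similarity; the cited work instead fine-tunes models on outcomes annotated in PubMed Central texts.

```python
# A rough sketch (not the cited authors' exact pipeline): scoring the
# semantic similarity of two clinical trial outcome phrases with a
# pre-trained BERT encoder. Checkpoint choice and mean pooling are
# illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden states over non-padding tokens."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state   # (1, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("overall survival at 12 months")
b = embed("12-month mortality rate")
print(f"cosine similarity: {torch.nn.functional.cosine_similarity(a, b).item():.3f}")
```

A model fine-tuned on annotated outcome pairs, as in the cited work, would be expected to outperform such off-the-shelf embeddings.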
“…For example, functional links between proteins have recently been identified by fine-tuning weights from BioBERT [44]. In addition, several studies in the literature have reported better outcomes when the BioBERT model is used [47][48][49][50].…”
Section: BioBERT Model
Citation type: mentioning, confidence: 99%
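The BioBERT fine-tuning pattern these citing papers rely on can be sketched as a sentence-pair classification step. The snippet below is a hedged illustration, not any cited paper's setup: the public dmis-lab/biobert-base-cased-v1.1 checkpoint, the labeled pair, the label scheme, and the learning rate are all assumptions made for the example.

```python
# A hedged sketch of fine-tuning BioBERT as a sentence-pair classifier
# ("similar" vs. "not similar" outcomes). The toy pair and hyperparameters
# are illustrative assumptions, not details from the cited works.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One toy training step on a single labeled outcome pair (1 = similar);
# a real run would loop over batches drawn from an annotated corpus.
batch = tokenizer("quality of life score", "QoL questionnaire score",
                  return_tensors="pt", truncation=True)
labels = torch.tensor([1])

model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"training loss: {loss.item():.3f}")
```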
“…Among five well-known methods, BERT showed the best performance for normalization of procedures and diagnoses. In addition, the authors of [57] presented a BERT-based model to measure the semantic similarity of clinical trial outcomes. Moreover, another BERT-based text analysis approach for medical applications was proposed by Zhang et al [58].…”
Section: Applications of Bidirectional Encoder Representations from Transformers (BERT)
Citation type: mentioning, confidence: 99%