2020
DOI: 10.1016/j.jbi.2020.103396
Evaluating sentence representations for biomedical text: Methods and experimental results

Cited by 32 publications (20 citation statements)
References 42 publications
“…As seen in Table 9, we could not single out a particular contextualized word embedding to utilize, as the use of word embedding may vary according to the various reasons: type of task (OIE, RE, or sentiment analysis), dataset domain (news, bio-medical data, or financial data), and the computational power available to the user. This is also in agreement with other papers that extensively compared embeddings in various tasks and found that the most suitable one is highly dependent on the task and data nature [51,52]. Table 9.…”
Section: Relation Extraction Results Discussion (supporting; confidence: 91%)
“…For example, the Pearson correlation score achieved by the RoBERTa-mimic was 0.8705; however, the RoBERTa-base yielded a higher performance of 0.8778. Tawfik et al [45] have similarly observed that the PubMed pretrained BioBERT did not outperform the corresponding general BERT model pretrained using English text on clinical STS. In the clinical STS task, using STS-General (an STS corpus annotated in the general English domain) as an extra training set in addition to STS-Clinic could efficiently improve performances for transformer-based models.…”
Section: Experiments Findings (mentioning; confidence: 99%)
“…Sentence embeddings have been evaluated with several tasks regarding NLP in the medical domain. For example, the authors of [36] performed a comprehensive evaluation of different sentence embedding based models for different tasks, such as semantic similarity, question answering or text-classification. Although some of the models evaluated showed promising results, there was no clear winner that beat the other models for all the tasks.…”
Section: Feature Engineering (mentioning; confidence: 99%)