Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.189

Extracting Chemical-Protein Interactions via Calibrated Deep Neural Network and Self-training

Abstract: The extraction of interactions between chemicals and proteins from biomedical articles is important in many fields of biomedical research, such as drug development and the prediction of drug side effects. Several natural language processing methods, including deep neural network (DNN) models, have been applied to address this problem. However, these methods were trained with hard-labeled data and therefore tend to become over-confident, which degrades model reliability. To estimate the data uncertai…

Cited by 5 publications (1 citation statement)
References 31 publications
“…where β controls the strength of the confidence penalty. Thus, incorporating negative entropy into the original loss function mitigates overfitting and improves the generalization performance [49]. Similar to a previous method [39], we defined the probability of label '1' obtained from the softmax function as a ranking BERT score (score_RB) of each mention–candidate pair (m, n) as follows:…”
Section: A Candidate Concept Generation
confidence: 99%
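The confidence penalty described in the excerpt above can be sketched as follows. This is a minimal NumPy illustration of an entropy-based confidence penalty (a loss of the form L = NLL − β·H(p)), not the cited paper's actual implementation; the function names and the choice of β are hypothetical:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_penalty_loss(logits, targets, beta=0.1):
    """Negative log-likelihood minus beta times the mean entropy of the
    predictive distribution. Subtracting the entropy term penalizes
    over-confident (low-entropy) outputs, acting as a regularizer that
    can improve calibration and generalization."""
    p = softmax(logits)
    n = logits.shape[0]
    nll = -np.log(p[np.arange(n), targets]).mean()
    entropy = -(p * np.log(p)).sum(axis=-1).mean()
    return nll - beta * entropy
```

Because the entropy term is non-negative, the penalized loss is never larger than the plain negative log-likelihood for β > 0; the probability assigned to label '1' by `softmax` would then serve as the ranking score for each mention–candidate pair.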