Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
DOI: 10.18653/v1/k17-1032

Learning local and global contexts using a convolutional recurrent network model for relation classification in biomedical text

Abstract: The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively iden…

Cited by 35 publications (28 citation statements)
References 29 publications
“…However, the improved performance is still worse than our model. Thirdly, among the baselines, it is interesting to note that without entity type features, the attention-based pooling technique performs worse than the conventional max-pooling strategy, which has also been observed earlier by Sahu and Anand [42] and Raj et al [13], while with entity type features, the attention-based pooling technique performs better than the conventional max-pooling strategy.…”
Section: Comparisons with Baseline Methods of Relation Extraction (supporting)
confidence: 64%
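The excerpt above contrasts two ways of collapsing a sequence of feature vectors into a fixed-size representation. A minimal numpy sketch of both — not any cited paper's implementation; the dimensions and the attention parameter `w` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 6, 4                        # hypothetical: T feature vectors of size d
H = rng.standard_normal((T, d))    # e.g. outputs of a convolution layer

# Max-over-time pooling: keep the largest value in each dimension.
max_pooled = H.max(axis=0)

# Attention-based pooling: a learned vector scores each position,
# softmax turns scores into weights, and the result is the
# weighted average of the positions.
w = rng.standard_normal(d)         # hypothetical attention parameter
scores = H @ w
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()               # weights sum to 1
attn_pooled = alpha @ H

assert max_pooled.shape == attn_pooled.shape == (d,)
```

Max-pooling keeps only the strongest activation per dimension, while attention pooling preserves a soft mixture over positions — which of the two wins can depend on extra inputs such as entity type features, as the quoted comparison notes.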
“…Two major neural architectures for the task include convolutional neural networks, CNNs (Zeng et al., 2014; Nguyen and Grishman, 2015; Zeng et al., 2015; Lin et al., 2016; Jiang et al., 2016; Zeng et al., 2017; Huang and Wang, 2017), and long short-term memory networks, LSTMs (Miwa and Bansal, 2016; Zhang et al., 2017; Katiyar and Cardie, 2017; Ammar et al., 2017). We also find combinations of those two architectures (Raj et al., 2017).…”
Section: Introduction (mentioning)
confidence: 90%
“…When doing experiments on this dataset, the previous methods [12,28,29] … Positive relations were annotated in both relation datasets, and samples of negative relation types (starting with "N" in this table) were extracted to ensure each concept pair within a sentence could be assigned a certain relation type.…”
Section: i2b2/VA Relation Dataset (mentioning)
confidence: 99%
“…The most direct way is to use the voting scheme [10]. The second way is to feed features extracted by an RNN architecture into a CNN [11,12], which can be seen as using the RNN to generate new input representations. The third way is to stack an RNN on a CNN.…”
Section: Introduction (mentioning)
confidence: 99%
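The second combination above — RNN hidden states fed into a convolution with max-over-time pooling — is the shape of the CRNN the abstract describes. A minimal numpy forward-pass sketch under that reading, not the paper's implementation; all dimensions and weight names are hypothetical and untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
T, d_in, d_h, k, n_filters, n_classes = 7, 5, 8, 3, 4, 3

# --- Recurrent stage: a simple (Elman) RNN over token embeddings ---
Wx = rng.standard_normal((d_h, d_in)) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1
b = np.zeros(d_h)

def rnn_states(x):
    """Return the hidden state at every time step, shape (T, d_h)."""
    h = np.zeros(d_h)
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(Wx @ x[t] + Wh @ h + b)
        states.append(h)
    return np.stack(states)

# --- Convolutional stage: width-k 1-D conv + max-over-time pooling ---
F = rng.standard_normal((n_filters, k, d_h)) * 0.1

def conv_maxpool(states):
    n_windows = states.shape[0] - k + 1
    feats = np.empty((n_filters, n_windows))
    for f in range(n_filters):
        for t in range(n_windows):
            feats[f, t] = np.sum(F[f] * states[t:t + k])
    return feats.max(axis=1)          # max-over-time pooling

W_out = rng.standard_normal((n_classes, n_filters)) * 0.1

x = rng.standard_normal((T, d_in))    # a sentence of T token embeddings
pooled = conv_maxpool(rnn_states(x))  # fixed-size sentence feature
logits = W_out @ pooled               # one score per relation class
```

The RNN supplies context-aware representations of each token (global context); the convolution then detects local k-gram patterns over those representations before pooling to a fixed size for classification.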