2017
DOI: 10.1186/s12859-017-1855-x

An attention-based effective neural model for drug-drug interactions extraction

Abstract: Background: Drug-drug interactions (DDIs) often bring unexpected side effects. The clinical recognition of DDIs is a crucial issue for both patient safety and healthcare cost control. However, although text-mining-based systems explore various methods to classify DDIs, the classification performance with regard to DDIs in long and complex sentences is still unsatisfactory. Methods: In this study, we propose an effective model that classifies DDIs from the literature by combining an attention mechanism and a recurrent neural network…

Cited by 86 publications (70 citation statements)
References 26 publications
“…Other studies have focused on data mining to identify PDDIs within titles, abstracts, and articles [16-28]. These studies include approaches that use various kinds of machine learning: linear kernels (e.g., Support Vector Machines) [18,19,32], non-linear kernels (e.g., Graph Models) [22], random forests [16], various neural network architectures [17,21,26,33], advanced use of part-of-speech and other linguistic features [19,23], unsupervised topic models [25], and semantic features from terminologies or ontologies [16,27,32,35].…”
Section: Comparison of the Results with Prior Work
confidence: 99%
“…These studies include approaches that use various kinds of machine learning: linear kernels (e.g., Support Vector Machines) [18,19,32], non-linear kernels (e.g., Graph Models) [22], random forests [16], various neural network architectures [17,21,26,33], advanced use of part-of-speech and other linguistic features [19,23], unsupervised topic models [25], and semantic features from terminologies or ontologies [16,27,32,35]. In general, the goal of these sophisticated approaches is to accurately extract PDDI data from the large body of scientific literature.…”
Section: Comparison of the Results with Prior Work
confidence: 99%
“…Attention mechanisms have recently been applied successfully to biomedical relation extraction tasks [14,18,30]. These attention networks learn a vector of importance weights for each word in a sentence, reflecting that word's impact on the final result.…”
Section: Attention Mechanisms
confidence: 99%
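To make the idea in the statement above concrete, the following is a minimal sketch of a word-level attention layer: each encoder state receives a scalar score, the scores are softmax-normalized into per-word weights, and the weighted sum of states becomes the sentence representation. The function name and the scoring vector w are illustrative assumptions, not the cited papers' exact models.

    import numpy as np

    def word_attention(hidden_states, w):
        """hidden_states: (seq_len, hidden_dim) RNN outputs; w: (hidden_dim,) learned scoring vector."""
        scores = hidden_states @ w                # one scalar score per word
        weights = np.exp(scores - scores.max())   # numerically stable softmax
        weights /= weights.sum()                  # per-word attention weights
        context = weights @ hidden_states         # weighted sum = sentence vector
        return weights, context

The returned weights are what make attention models partially interpretable: words with larger weights contributed more to the classification decision.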
“…However, it is very difficult for DL models to learn enough features from sequences of sentences alone. Instead of learning from full sentences, attention networks have demonstrated success in a wide range of NLP tasks [25-31]. In addition, BGRU-Attn [18] first used the additive attention mechanism [29] for the BB task to focus on only sections of the RNN output instead of the entire output, and achieved state-of-the-art performance.…”
confidence: 99%
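The additive (Bahdanau-style) attention referred to above scores each RNN output state as e_i = v^T tanh(W h_i) before softmax normalization, which lets the model concentrate on a few informative positions rather than the whole sequence. Below is a minimal sketch under that assumption; W and v are illustrative learned parameters, and this is not the BGRU-Attn implementation itself.

    import numpy as np

    def additive_attention(hidden_states, W, v):
        """Additive scoring e_i = v.T @ tanh(W @ h_i) over RNN outputs.
        hidden_states: (seq_len, hidden_dim); W: (attn_dim, hidden_dim); v: (attn_dim,)."""
        e = np.tanh(hidden_states @ W.T) @ v      # (seq_len,) unnormalized scores
        a = np.exp(e - e.max())
        a /= a.sum()                              # softmax over words
        return a @ hidden_states                  # attended summary vector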