Proceedings of the International Workshop on Semantic Big Data 2020
DOI: 10.1145/3391274.3393636
Automated ontology-based annotation of scientific literature using deep learning

Abstract: Representing scientific knowledge using ontologies enables data integration, consistent machine-readable data representation, and allows for large-scale computational analyses. Text mining approaches that can automatically process and annotate scientific literature with ontology concepts are necessary to keep up with the rapid pace of scientific publishing. Here, we present deep learning models (Gated Recurrent Units (GRU) and Long Short Term Memory (LSTM)) combined with different input encoding formats for automa…

Cited by 7 publications (12 citation statements)
References 23 publications
“…Surprisingly, we found that GRU based models consistently outperformed the commonly used LSTM based architectures. Contrary to expectations, the inclusion of ontology hierarchy resulted in a modest improvement in performance [9].…”
Section: Related Work (contrasting)
confidence: 78%
“…This work was limited to predicting unigram annotations and did not take into account the rich semantic information in ontology hierarchies. Subsequent work [9] from our group improved on this by expanding the types of annotations predicted and by incorporating semantics from ontology subsumption into the prediction. Surprisingly, we found that GRU based models consistently outperformed the commonly used LSTM based architectures.…”
Section: Related Work (mentioning)
confidence: 99%
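The citation statements above compare GRU-based and LSTM-based encoders over token sequences. As a rough illustration only (not the paper's code, and with all weight names hypothetical), a single GRU cell step can be sketched in plain numpy to show the gating the models rely on:

```python
import numpy as np

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h_prev @ Uz)               # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh)   # candidate hidden state
    return (1 - z) * h_prev + z * h_tilde           # interpolate old and new

# Tiny demo with made-up dimensions: embedding dim 4, hidden dim 3
rng = np.random.default_rng(0)
d, h = 4, 3
params = [rng.standard_normal(s) * 0.1 for s in [(d, h), (h, h)] * 3]
h_t = np.zeros(h)
for token_vec in rng.standard_normal((5, d)):  # a 5-token "sentence"
    h_t = gru_step(token_vec, h_t, *params)
print(h_t.shape)  # (3,)
```

The GRU's two gates (versus the LSTM's three, plus a separate cell state) mean fewer parameters per hidden unit, which is one commonly offered explanation for GRUs matching or beating LSTMs on modest-sized annotation corpora.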