2022
DOI: 10.1093/bioinformatics/btac001
STonKGs: a sophisticated transformer trained on biomedical text and knowledge graphs

Abstract: Motivation: The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications that use either text, through natural language processing (NLP), or structured data, through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. Results: …
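The abstract's core idea is combining two modalities: a text embedding (from an NLP model) and an entity embedding (from a KGEM). As a purely illustrative sketch of such multimodal fusion, and not the paper's actual architecture, the simplest approach concatenates the two vectors and projects them into a shared space; the dimensions, names, and random weights below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 768-d text embedding (BERT-style)
# and a 128-d knowledge graph entity embedding.
TEXT_DIM, KG_DIM, FUSED_DIM = 768, 128, 256


def fuse(text_vec: np.ndarray, kg_vec: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Concatenate the two modality vectors and linearly project
    them into a shared multimodal space."""
    joint = np.concatenate([text_vec, kg_vec])  # shape (TEXT_DIM + KG_DIM,)
    return W @ joint                            # shape (FUSED_DIM,)


# Randomly initialized projection, scaled for stable magnitudes.
W = rng.standard_normal((FUSED_DIM, TEXT_DIM + KG_DIM)) / np.sqrt(TEXT_DIM + KG_DIM)

text_vec = rng.standard_normal(TEXT_DIM)  # stand-in for a sentence embedding
kg_vec = rng.standard_normal(KG_DIM)      # stand-in for an entity embedding

fused = fuse(text_vec, kg_vec, W)
print(fused.shape)  # (256,)
```

In practice, transformer-based multimodal models learn such joint representations end-to-end rather than via a fixed projection, but the sketch shows why a fused vector can carry signal from both modalities that neither embedding holds alone.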

Cited by 18 publications (8 citation statements)
References 22 publications
“…At the same time, in terms of published articles, Lehmann ranked first with 17 articles, concentrating mainly on the analysis of semantic question-answering systems and link prediction over knowledge graphs [26], [27]. Fernández took second place with 12 publications, specializing mainly in the application of knowledge graph embedding models and natural language processing to biomedical literature [28]. Tan and two other authors tied for third place with 11 articles each; he investigated generic knowledge graph embedding models and knowledge graph representation, with his most frequent topics being domain relation extraction and relational knowledge prediction [29], [30].…”
Section: Authorsmentioning
confidence: 99%
“…Nerella et al provided a thorough survey on Transformers in healthcare, highlighting their versatility across fields like genomics and patient care [4]. Balabin introduced Multimodal Transformers, showcasing their capability to manage complex biomedical data [5]. Thirunavukarasu et al discussed LLMs in medicine, emphasizing the necessity for models that comprehend complex medical terminology [6].…”
Section: Related Workmentioning
confidence: 99%
“…CoVEffect stems from this thread of works, but it is carefully adapted to solve a more complex task: that of predicting a series of tuples from SARS-CoV-2–related abstracts where we consider a variation, its effect, and the change of its level. Each of the currently available systems supports only one user-driven annotation [ 47 ], predictions of single independent annotations with ontological terms [ 48 ], or biomedical general-purpose triplets based on existing knowledge graphs [ 49 ], especially targeted to protein–protein interactions [ 50 ]. These correspond to different tasks than the one performed by CoVEffect, and the described approaches do not allow for online modifications of the training dataset.…”
Section: Related Workmentioning
confidence: 99%