2021
DOI: 10.3390/s22010003
Unsupervised Event Graph Representation and Similarity Learning on Biomedical Literature

Abstract: The automatic extraction of biomedical events from the scientific literature has drawn keen interest in the last several years, recognizing complex and semantically rich graphical interactions otherwise buried in texts. However, very few works revolve around learning embeddings or similarity metrics for event graphs. This gap leaves biological relations unlinked and prevents the application of machine learning techniques to promote discoveries. Taking advantage of recent deep graph kernel solutions and pre-tra…

Cited by 9 publications (4 citation statements)
References 90 publications
“…For instance, having dense formal representations is becoming more and more indispensable to avoid hallucinations in conversational agents and produce more factual and semantically coherent summaries. Notably, authors have devised unsupervised and inductive methods for mapping small semantic parsing graphs into low-dimensional vectors (whole-graph granularity), reflecting their structural and semantic similarities [116]. For these reasons, we find the direction of evaluating LP on semantic parsing graphs compelling and still unexplored.…”
Section: Link Prediction On Semantic Parsing Graphs
confidence: 98%
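The statement above describes mapping small semantic parsing graphs into low-dimensional vectors that reflect their structural and semantic similarities. A minimal sketch of that idea, assuming a Weisfeiler-Lehman-style label-counting scheme and cosine similarity (an illustration, not the paper's actual deep graph kernel method; all function names here are hypothetical):

```python
# Minimal sketch (not the paper's actual method): map small labeled graphs
# to sparse count vectors via one Weisfeiler-Lehman-style relabeling
# iteration, then compare them with cosine similarity.
from collections import Counter
from math import sqrt

def wl_features(labels, edges, iterations=1):
    """labels: {node: label}; edges: iterable of undirected (u, v) pairs."""
    adj = {n: [] for n in labels}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    current = dict(labels)
    feats = Counter(current.values())  # iteration-0 label counts
    for _ in range(iterations):
        new = {}
        for n in current:
            neigh = sorted(current[m] for m in adj[n])
            new[n] = current[n] + "|" + ",".join(neigh)  # compressed label
        current = new
        feats.update(current.values())
    return feats

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy event graphs with identical structure and labels:
g1 = wl_features({"e": "Regulation", "p": "Protein"}, [("e", "p")])
g2 = wl_features({"x": "Regulation", "y": "Protein"}, [("x", "y")])
print(cosine(g1, g2))  # 1.0 for isomorphic labeled graphs
```

Isomorphic labeled graphs produce identical feature vectors and thus similarity 1.0, while graphs with differing node labels or structure score lower.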
“…Future works should explore memory writing/reading operations with structured information extracted from text, comparing unsupervised techniques for document metadata acquisition (e.g., classes [ 49 , 50 ] and entity relationships [ 51 , 52 ]) with advanced semantic parsing solutions such as event extraction [ 53 , 54 ] and abstract meaning representation, which was recently used for knowledge injection into deep neural networks [ 55 , 56 ]. The community should envisage novel graph representation learning methods [ 57 , 58 , 59 , 60 ] to densely represent multi-relational structured data following a Linked Open Data vision centered on the integration of several source knowledge graphs or relational databases via automatic entity matching [ 61 ]. Taking inspiration from biology [ 62 , 63 ] and communication networks [ 64 , 65 , 66 , 67 ], we underline the importance of managing dynamic scenarios, tracking knowledge refinements among sentences, and propagating information, which is pivotal when processing lengthy inputs.…”
Section: Limitations and Future Directions
confidence: 99%
“…The loss function for all relation description statement sets is defined in Equation (11), where γ is an interval parameter greater than 0 and T represents the set of text description statements.…”
Section: Representation Learning Of Relation Description
confidence: 99%
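Equation (11) itself is not reproduced in the statement above; as a hedged illustration only, a margin-based ranking loss with an interval parameter γ > 0 over a set of statements commonly takes the following shape (the `score` function and the pairing of positives with negatives are assumptions, not the cited paper's exact formulation):

```python
# Hedged sketch: a generic margin-based ranking loss with interval
# (margin) parameter gamma > 0, of the kind the quote suggests. The
# `score` function is a hypothetical placeholder; lower score is
# assumed to mean a more plausible statement.
def margin_ranking_loss(positives, negatives, score, gamma=1.0):
    """Sum of max(0, gamma + score(pos) - score(neg)) over paired samples."""
    return sum(
        max(0.0, gamma + score(pos) - score(neg))
        for pos, neg in zip(positives, negatives)
    )

# Toy usage with a stand-in scorer (string length as a dummy "energy"):
loss = margin_ranking_loss(["short"], ["a much longer negative"], len, gamma=1.0)
print(loss)  # 0.0: the negative already scores worse than the positive by more than gamma
```

The loss is zero once each positive outscores its paired negative by at least γ, which pushes positive statements away from negatives by the margin during training.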
“…Similar to the idea of unsupervised semantic-aware graph representation learning [11], text information can provide rich semantic resources for graph representation and play an important auxiliary role in optimizing a graph representation model. Therefore, recent research on KG representation learning focuses on how to use the internal information of a knowledge graph to optimize the representation of knowledge.…”
Section: Introduction
confidence: 99%