Representation Learning for Natural Language Processing 2023
DOI: 10.1007/978-981-99-1600-9_6

Graph Representation Learning

Cheng Yang, Yankai Lin, Zhiyuan Liu, et al.

Abstract: Graph structure, which can represent objects and their relationships, is ubiquitous in big data, including natural language. Besides the original text as a sequence of word tokens, much additional information in NLP is in graph form, such as syntactic relations between words in a sentence, hyperlink relations between documents, and semantic relations between entities. Hence, it is critical for NLP to encode these graph data with graph representation learning. Graph representation learning, also known a…
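As a rough illustration of the neighborhood aggregation at the heart of graph representation learning, the sketch below implements one GCN-style propagation layer in plain NumPy. The toy graph, feature dimensions, and random weights are assumptions made for this example only; they do not come from the chapter.

```python
# Minimal sketch of one GCN-style message-passing layer (NumPy only).
# The graph, dimensions, and weights are illustrative assumptions.
import numpy as np

def gcn_layer(adj: np.ndarray, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    deg = a_hat.sum(axis=1)                        # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # aggregate + transform + ReLU

# Toy graph: 4 nodes on a path, 8-dim input features, 4-dim output embeddings.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = rng.normal(size=(4, 8))
weight = rng.normal(size=(8, 4))
embeddings = gcn_layer(adj, features, weight)
print(embeddings.shape)  # (4, 4): a 4-dim embedding per node
```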

Cited by 3 publications (2 citation statements)
References: 82 publications
“…Future work could experiment with alternating GNN and Transformer layers [69] or other more complex and advanced architectures [70,71]. We might even explore constructing dynamic graphs for genes to incorporate temporal information. Lastly, to address the issues of data scarcity and imbalance met in downstream tasks, beyond using pre-training and rich biological features, semi-supervised learning [72–74] and long-tail learning [75,76] may offer more solutions.…”
Section: Discussion
confidence: 99%
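The statement above mentions alternating GNN and Transformer layers. A minimal sketch of that pattern, assuming a mean-aggregation GNN step followed by a standard PyTorch TransformerEncoderLayer applied over the node set, might look as follows; the layer sizes and toy graph are illustrative only, not the architecture of any cited paper.

```python
# Hedged sketch of alternating a GNN step with a Transformer layer over the nodes.
import torch
import torch.nn as nn

class GNNTransformerBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.lin = nn.Linear(dim, dim)  # GNN feature transform
        self.tf = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Local step: mean-aggregate each node's neighbors (plus itself).
        a_hat = adj + torch.eye(adj.size(0))
        a_norm = a_hat / a_hat.sum(dim=1, keepdim=True)
        h = torch.relu(self.lin(a_norm @ h))
        # Global step: full self-attention treating the nodes as a token sequence.
        return self.tf(h.unsqueeze(0)).squeeze(0)

# Toy usage: 5 nodes, 16-dim features, two alternating blocks stacked.
adj = (torch.rand(5, 5) > 0.5).float()
h = torch.randn(5, 16)
for block in [GNNTransformerBlock(16), GNNTransformerBlock(16)]:
    h = block(h, adj)
print(h.shape)  # torch.Size([5, 16])
```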
“…Large Language Models (LLMs) have demonstrated significant success across various domains [4,8,12,16,18,19], largely attributed to their extensive knowledge memorized during the pretraining phase and their exceptional ability to generalize during the fine-tuning process on diverse textual datasets [2,7,9,13,14]. This success has spurred interest in combining graph neural networks (GNNs) with LLMs to enhance their capabilities in understanding and modeling graphs [6,11,15,21,23], including implementing LLMs as encoders to process features within GNNs [3,5,10,22], and employing LLMs as aligners with GNNs to enhance performance [1,20,24].…”
Section: Introduction
confidence: 99%
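One pattern mentioned above is implementing LLMs as encoders that produce node features for a GNN. The sketch below mimics that pipeline with a hashed bag-of-words encoder standing in for a real LLM encoder, followed by one neighborhood-averaging step; the documents, graph, and dimensions are invented for illustration and do not reflect any cited system.

```python
# Hedged sketch: a text encoder (stand-in for an LLM) produces node features,
# which a GNN-style aggregation step then refines over the graph.
import numpy as np

DIM = 32

def text_encode(text: str) -> np.ndarray:
    """Stand-in for an LLM/text encoder: hashed bag-of-words into a DIM-dim vector."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    return vec / max(np.linalg.norm(vec), 1e-8)

def aggregate(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """One GNN-style step: average each node's own and neighbors' embeddings."""
    a_hat = adj + np.eye(adj.shape[0])
    return (a_hat / a_hat.sum(axis=1, keepdims=True)) @ feats

# Three documents linked by hyperlinks (a tiny hyperlink graph).
docs = ["graph neural networks for text",
        "large language models as encoders",
        "semantic relations between entities"]
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
node_feats = np.stack([text_encode(d) for d in docs])
node_reprs = aggregate(adj, node_feats)
print(node_reprs.shape)  # (3, 32): one graph-aware representation per document
```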