2022 International Conference on Artificial Intelligence in Everything (AIE)
DOI: 10.1109/aie57029.2022.00128
Research on knowledge representation learning method of diet knowledge graph

Cited by 5 publications (9 citation statements) · References 13 publications
“…Table 1 shows the statistics of these datasets. Furthermore, we compare our Mix-Key method with a vanilla neural network, three topology-based augmentation methods (PermE [37], MaskN [38] and NodeSam [29]), two mixup-based augmentation methods (MixupGraph [27] and Graph Transplant [30]) and four graph contrastive learning methods (AutoGCL [39], GraphMVP [40], MolCLR [9] and KANO [41]).…”
Section: Methods
confidence: 99%
“…Lack of canonical node features: Traditional network biology studies rely solely on biological network structures to gain insights [7,6,49]. Meanwhile, the presence of rich node features in many existing graph benchmarks is crucial for the success of GNNs [55], which poses a challenge for GNNs that must learn without meaningful node features. An exciting and promising future direction for obtaining meaningful node features is to leverage the sequential or structural information of the gene product (e.g., protein) using large-scale biological pre-trained language models such as ESM-2 [53].…”
Section: Challenges For the Obnb Benchmarks And Potential Future Dire…
confidence: 99%
“…OneHotLogDeg (short for LogDeg) first computes the log degree of each node in the graph and then uniformly bins the nodes into one of 32 bins based on their log degree. The one-hot encoded node degrees approach has recently been shown to be a great structure encoder, whose utilization can sometimes result in performance superior to using the original node features associated with the graph [17,55]. Meanwhile, the design choice of using log-uniform grids stems from the scale-free nature of biological networks [2].…”
Section: A21 Node Feature Design
confidence: 99%
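The LogDeg encoding described in the excerpt above (log-transform node degrees, then one-hot encode over 32 uniform bins in log space) can be sketched directly. A minimal NumPy version, where the function name and the choice of binning over the observed log-degree range are assumptions rather than the cited implementation:

```python
import numpy as np

def one_hot_log_deg(degrees, num_bins=32):
    """One-hot encode node degrees on a log-uniform grid (LogDeg sketch).

    degrees: iterable of node degrees (assumed >= 1).
    Returns an (n_nodes, num_bins) one-hot feature matrix.
    """
    log_deg = np.log(np.asarray(degrees, dtype=float))
    # Uniform bin edges over the observed log-degree range.
    edges = np.linspace(log_deg.min(), log_deg.max(), num_bins + 1)
    # Assign each node to a bin; clip so the maximum falls in the last bin.
    bins = np.clip(np.digitize(log_deg, edges) - 1, 0, num_bins - 1)
    feats = np.zeros((len(log_deg), num_bins))
    feats[np.arange(len(log_deg)), bins] = 1.0
    return feats
```

Because biological networks are roughly scale-free, uniform bins in log space spread hub and leaf nodes across the grid far more evenly than uniform bins in raw degree would.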
“…The activity and properties of a drug are closely related to the structure of the drug molecule. Nevertheless, most current self-supervised models either do not use 3D information or use it only partially (Liu et al. 2022a, Stärk et al. 2022). We introduce a novel 3D–3D view contrastive learning method to learn molecular structural semantics.…”
Section: Introduction
confidence: 99%
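The excerpt does not detail the authors' 3D–3D objective, but view-based contrastive methods of this kind typically optimize an InfoNCE-style loss that pulls together embeddings of two views of the same molecule and pushes apart embeddings of different molecules. A generic NumPy sketch of that loss (not the cited method; all names here are illustrative):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss between two views.

    z1, z2: (n, d) embeddings of the same n samples under two views.
    Matching rows are positives; all other cross-view pairs are negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimize their negative log-prob.
    return -np.mean(np.diag(log_probs))
```

When the two views embed each molecule identically and distinctly from the others, the loss approaches zero; it grows as positives drift apart or negatives collide.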