2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9207015

DRG2vec: Learning Word Representations from Definition Relational Graph

Cited by 10 publications (17 citation statements)
References 26 publications
“…In this study, we maintain the mask ratio of 15, 25, and 15% for AttrMask, GraphMAE, and MAM, respectively, as per their original experimental settings. 10,11,13 Since MAM is designed to tackle the problem of imbalanced atom distribution, we test our method only with AttrMask and GraphMAE in this study. It is essential to note that pretraining tasks with our proposed masking strategy retain the same settings as that of the previous works 10,13 and will be denoted as AttrMask(WMM) or GraphMAE(WMM) for clarity.…”
Section: Analysis of Molecules
confidence: 99%
“…10,11,13 Since MAM is designed to tackle the problem of imbalanced atom distribution, we test our method only with AttrMask and GraphMAE in this study. It is essential to note that pretraining tasks with our proposed masking strategy retain the same settings as that of the previous works 10,13 and will be denoted as AttrMask(WMM) or GraphMAE(WMM) for clarity. However, the pretraining methods with random masking will not be explicitly annotated in the subsequent sections.…”
Section: Analysis of Molecules
confidence: 99%
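The mask ratios quoted above refer to attribute-masking pretraining, in which a fraction of node (atom) features is hidden and then reconstructed. As a rough illustration only, the Python sketch below shows a generic random node-attribute mask at a configurable ratio; the function name, the mask token, and the use of PyTorch tensors are assumptions here, and the cited weighted strategy (WMM) is not reproduced.

import torch

def random_attr_mask(node_features, mask_ratio=0.15, mask_token=0.0):
    """Hide a random fraction of node (atom) feature rows.

    Generic sketch of the random-masking baseline mentioned above
    (AttrMask/GraphMAE style); not the cited weighted strategy (WMM).
    """
    num_nodes = node_features.size(0)
    num_masked = max(1, int(mask_ratio * num_nodes))
    perm = torch.randperm(num_nodes)        # uniform choice of nodes to mask
    masked_idx = perm[:num_masked]
    corrupted = node_features.clone()
    corrupted[masked_idx] = mask_token      # replace masked rows with a fill value
    # A pretraining objective would reconstruct node_features[masked_idx]
    # from `corrupted` and the graph structure.
    return corrupted, masked_idx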
“…In fact, learned feature mapping (via deep neural network for non-tabular and structured data) allows for learning (namely generating) features customized for the problem at hand potentially reducing the number of features to the minimum necessary number. The difference, with respect to the case of learned feature mapping for tabular data, is that in the case of non-tabular data, the approaches for learned feature mapping are becoming quite effective and efficient thanks to the availability of huge amount of data that allows to pretrain model in one application that can be applied and fine-tuned to other problems [ 160 – 162 ]. These pretrained models map complex data structures into reasonably small cardinality yet quite expressive features that can be reused efficiently and effectively in many applications.…”
Section: Tip 8: Appropriately Feature Engineer Non-tabular Data
confidence: 99%
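The passage above describes reusing encoders pretrained on large source datasets as learned feature mappings that are then fine-tuned for a new problem. A minimal sketch of that pattern, assuming PyTorch and a recent torchvision (the resnet18 backbone, the ImageNet weights, and the two-class head are illustrative choices, not taken from the cited works):

import torch
import torch.nn as nn
from torchvision import models

# Load an encoder pretrained on a large source task (here ImageNet) and
# reuse its representation as a compact learned feature mapping.
backbone = models.resnet18(weights="IMAGENET1K_V1")  # string weights need torchvision >= 0.13
backbone.fc = nn.Identity()          # drop the source-task classifier head
for p in backbone.parameters():
    p.requires_grad = False          # freeze: features are reused, not relearned

# Small task-specific head that is fine-tuned on the target problem.
head = nn.Linear(512, 2)             # 512 = resnet18 feature width; 2 target classes

def extract_features(x):
    backbone.eval()
    with torch.no_grad():
        return backbone(x)           # (batch, 512) learned features

# Fine-tuning step: only the head's parameters are updated.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 3, 224, 224)      # dummy batch standing in for non-tabular data
y = torch.randint(0, 2, (4,))
logits = head(extract_features(x))
loss = criterion(logits, y)
loss.backward()
optimizer.step()

Freezing the backbone keeps the reused features fixed; unfreezing some of its layers would correspond to the fine-tuning case mentioned in the quoted passage.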