2020
DOI: 10.48550/arxiv.2009.08925
Preprint

The Infinity Mirror Test for Graph Models

Abstract: Graph models, like other machine learning models, have implicit and explicit biases built-in, which often impact performance in nontrivial ways. The model's faithfulness is often measured by comparing the newly generated graph against the source graph using any number or combination of graph properties. Differences in the size or topology of the generated graph therefore indicate a loss in the model. Yet, in many systems, errors encoded in loss functions are subtle and not well understood. In the present work,…
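To make the abstract's notion of faithfulness concrete, here is a minimal sketch of one such source-versus-generated comparison. The choice of property (degree distribution), the distance measure (a two-sample KS statistic), and the stand-in "generated" graph are all illustrative assumptions, not the paper's specific metrics.

```python
# Illustrative faithfulness check: compare a source graph against a
# model-generated graph on one graph property (the degree distribution).
# Property and distance metric are assumptions for illustration only.
import networkx as nx
from scipy.stats import ks_2samp

def degree_distribution_distance(source: nx.Graph, generated: nx.Graph) -> float:
    """Two-sample KS statistic between degree sequences (0.0 = indistinguishable)."""
    src_degrees = [d for _, d in source.degree()]
    gen_degrees = [d for _, d in generated.degree()]
    return ks_2samp(src_degrees, gen_degrees).statistic

source = nx.barabasi_albert_graph(500, 3, seed=0)
# Stand-in for a model's output; a real test would use the fitted model's graph.
generated = nx.erdos_renyi_graph(500, 0.012, seed=0)
print(f"Degree-distribution KS distance: {degree_distribution_distance(source, generated):.3f}")
```

The KS statistic here is one arbitrary choice; per the abstract, "any number or combination of graph properties" (clustering, path lengths, spectra, and so on) could be substituted as the comparison basis.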


Cited by 1 publication (1 citation statement)
References 27 publications
“…GNNs offer excellent modeling capacity in downstream machine learning tasks but can be difficult to train, have enormous parameter spaces, and do not readily permit human inspection. Despite performing well on link prediction and node classification tasks, some GNN architectures fail to preserve the topological fidelity of the input graphs [10]. AVRGs, on the other hand, can capture and model the subtle intricacies in both topological and attribute spaces without requiring supervision, deep neural architectures, and training.…”
Section: Introduction
confidence: 99%