2023
DOI: 10.1007/s00521-023-08900-7

Online cross-layer knowledge distillation on graph neural networks with deep supervision

Cited by 3 publications (1 citation statement)
References 34 publications
“…By adopting this strategy, we optimize efficiency, allowing particles to identify the optimal global position without imposing excessive computational demands [59]. Traditional compression methods for neural network models often raise concerns about their potential negative impact on model robustness [3, 14, 15, 17, 19]. In our approach, we avoid directly reducing or compressing the model, as such actions have been associated with detrimental effects on model parameters.…”
Section: Related Work
Confidence: 99%
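
The citing statement's mention of particles converging on an optimal global position refers to particle swarm optimization (PSO). As a rough sketch only, not the cited work's actual implementation (the function `pso`, the objective, and the parameters `w`, `c1`, `c2` are illustrative assumptions), a minimal global-best PSO update could look like this:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch: each particle tracks its personal best,
    and the whole swarm is attracted toward the best position found so far."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))    # particle positions
    vel = np.zeros((n_particles, dim))                   # particle velocities
    pbest = pos.copy()                                   # personal best positions
    pbest_val = np.array([objective(p) for p in pos])    # personal best values
    gbest = pbest[np.argmin(pbest_val)].copy()           # global best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example usage: minimize the sphere function in 5 dimensions
best_pos, best_val = pso(lambda x: float(np.sum(x**2)), dim=5)
```

The cost per iteration is one objective evaluation per particle plus a vectorized position update, which is consistent with the statement's point that the global best can be tracked without excessive computational demands.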