2020
DOI: 10.48550/arxiv.2012.03550
Preprint

SGD_Tucker: A Novel Stochastic Optimization Strategy for Parallel Sparse Tucker Decomposition

Cited by 1 publication (1 citation statement)
References 28 publications
“…Kaya and Ucar [28] provide parallel algorithms for sparse Tucker decompositions. Li et al. [32] introduce SGD-Tucker, which uses stochastic gradient descent to perform Tucker decomposition of sparse tensors.…”
Section: Previous Work
Citation type: mentioning (confidence: 99%)
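
For context, the cited statement describes SGD-Tucker as applying stochastic gradient descent to the Tucker decomposition of sparse tensors. The sketch below is a minimal, illustrative per-entry SGD loop for a 3-way Tucker model; it is not the parallel SGD_Tucker algorithm of Li et al., and the function name `sgd_tucker`, the ranks, learning rate, and epoch count are all assumptions made for illustration.

```python
import numpy as np

def sgd_tucker(entries, shape, ranks=(4, 4, 4), lr=0.01, n_epochs=50, seed=0):
    """Toy SGD for a 3-way Tucker model (illustrative only, not SGD_Tucker itself).

    entries: list of (i, j, k, value) tuples for the observed cells
    of a sparse tensor with the given shape.
    """
    rng = np.random.default_rng(seed)
    # Small random initialization of the factor matrices and core tensor.
    A = [rng.normal(scale=0.1, size=(shape[m], ranks[m])) for m in range(3)]
    G = rng.normal(scale=0.1, size=ranks)

    for _ in range(n_epochs):
        rng.shuffle(entries)  # visit observed entries in random order
        for i, j, k, x in entries:
            a, b, c = A[0][i], A[1][j], A[2][k]
            # Prediction: core tensor contracted with the three factor rows.
            pred = np.einsum('pqr,p,q,r->', G, a, b, c)
            e = pred - x
            # Gradients of 0.5 * e**2 with respect to each block.
            grad_a = e * np.einsum('pqr,q,r->p', G, b, c)
            grad_b = e * np.einsum('pqr,p,r->q', G, a, c)
            grad_c = e * np.einsum('pqr,p,q->r', G, a, b)
            grad_G = e * np.einsum('p,q,r->pqr', a, b, c)
            A[0][i] -= lr * grad_a
            A[1][j] -= lr * grad_b
            A[2][k] -= lr * grad_c
            G -= lr * grad_G
    return G, A

# Toy usage: a few observed entries of a hypothetical 5x5x5 sparse tensor.
entries = [(0, 1, 2, 3.0), (1, 1, 0, -1.0), (4, 3, 2, 0.5)]
G, A = sgd_tucker(entries, shape=(5, 5, 5))
```

Because each update touches only the factor rows and core indexed by one observed entry, per-entry SGD of this kind scales with the number of nonzeros rather than the full tensor size, which is the property that makes it attractive for sparse data.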