2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps.2017.80
Model-Driven Sparse CP Decomposition for Higher-Order Tensors

Cited by 52 publications (32 citation statements); references 19 publications.
“…Recently, there has been growing interest in scaling tensor operations to bigger data and more processors in both the data mining/machine learning and the high performance computing communities. For sparse tensors, there have been parallelization efforts to compute CP decompositions both on shared-memory platforms [22], [23] as well as distributed-memory platforms [24]-[26], and these approaches can be generalized to constrained problems [13]. The focus of this work is on dense tensors, but many of the ideas for sparse tensors are applicable to the dense case, including parallel data distributions, communication patterns, and techniques to avoid recomputation across modes.…”
Section: Related Work
“…The idea of using dimension trees (discussed in section IV-A) to avoid recomputation within MTTKRPs across modes is introduced in [4] for computing the CP decomposition of dense tensors. It has also been used for sparse CP [23], [26] and other tensor computations [24].…”
Section: Related Work
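To make the recomputation-avoidance idea behind dimension trees concrete, here is a minimal dense 3-way sketch (an illustration under assumed shapes, not the implementation from [4]): the contraction with the last factor matrix is computed once and the intermediate is shared between the mode-0 and mode-1 MTTKRP results.

```python
import numpy as np

def mttkrp_modes01(X, A, B, C):
    # X: dense I x J x K tensor; A, B, C: factor matrices of
    # shapes (I, R), (J, R), (K, R). A naive CP-ALS sweep would
    # contract X with two factors separately for each mode; here the
    # partial product with C is formed once and reused for two modes.
    Z = np.einsum('ijk,kr->ijr', X, C)   # shared intermediate (I x J x R)
    M0 = np.einsum('ijr,jr->ir', Z, B)   # mode-0 MTTKRP: X_(1) (C ⊙ B)
    M1 = np.einsum('ijr,ir->jr', Z, A)   # mode-1 MTTKRP: X_(2) (C ⊙ A)
    return M0, M1
```

The saving grows with tensor order: a dimension tree arranges the modes so that each partial contraction is shared by as many MTTKRPs as possible, trading memory for the intermediates against repeated flops.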
“…Most of the available tensor analysis software packages [7,26] are written in Matlab, yielding limitations on performance and on the utility of multicore and other high-performance architectures. While there have been many recent developments in efficient software for sparse tensor decompositions [15,22], there remain few options in the case of dense tensors, which is the subject of this work. Our motivating application is a neuroimaging data analysis problem involving functional MRI (fMRI) data.…”
Section: Introduction
“…Building on the CSF data structure, Li et al. proposed an adaptive tensor memoization algorithm that reduces the number of redundant floating-point operations incurred during the sequence of MTTKRP computations required by CP-ALS, at the cost of increased memory usage [18] due to storing semi-sparse intermediate tensors. They also developed AdaTM, an adaptive model-tuning framework that chooses an optimized memoization algorithm based on the sparse input tensor.…”
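As a baseline for the sparse MTTKRP kernel that these memoization schemes accelerate, here is a minimal coordinate-format (COO) sketch, not the CSF-based algorithm from [18]; the function and variable names are illustrative. Each nonzero contributes a scaled elementwise product of the corresponding factor rows, accumulated into its mode-0 output row.

```python
import numpy as np

def sparse_mttkrp_mode0(coords, vals, factors, dim0):
    # coords: (nnz, 3) integer array of (i, j, k) indices
    # vals:   (nnz,) nonzero values of a 3-way sparse tensor
    # factors: (B, C) with shapes (J, R) and (K, R)
    B, C = factors
    M = np.zeros((dim0, B.shape[1]))
    # Per-nonzero contribution: vals * (row of B ∘ row of C)
    contrib = vals[:, None] * B[coords[:, 1]] * C[coords[:, 2]]
    # Scatter-add into the output rows (handles repeated i indices)
    np.add.at(M, coords[:, 0], contrib)
    return M
```

CSF-style algorithms improve on this flat loop by exploiting the fiber structure of the tensor, so partial products along shared index prefixes are computed once rather than per nonzero.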