2017
DOI: 10.1021/acs.jctc.6b00853

A General Sparse Tensor Framework for Electronic Structure Theory

Abstract: Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in o…

Cited by 13 publications (10 citation statements)
References 41 publications
“…The conventional approaches first extract dense block-pairs of the two input tensors, and then perform multiplication by calling dense BLAS linear algebra. Finally, those approaches pre-allocate the output tensor using domain knowledge or a symbolic phase approach [20,25,53,54,67], such as TiledArray [54], Cyclops Tensor Framework [36], and libtensor [14,49]. The state-of-the-art work Sparta focuses on element-wise sparse tensor contractions [44], solving the high dimensionality challenges through hash table-based approaches and addressing the unknown output tensor and irregular memory access challenges by dynamic allocation, permutation and sorting.…”
Section: Related Work
confidence: 99%
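
To make the block-sparse scheme described in this excerpt concrete, here is a minimal Python sketch, with invented names that do not reflect the actual APIs of TiledArray, Cyclops Tensor Framework, or libtensor. A symbolic phase first predicts the output's nonzero block pattern from the input patterns alone, and a numeric phase then multiplies each matching dense block-pair with a dense linear algebra call (numpy's `@` here):

```python
import numpy as np

def symbolic_phase(a_blocks, b_blocks):
    """Predict the nonzero block pattern of C = A * B from the block
    sparsity patterns of A and B alone, with no numeric work."""
    c_pattern = set()
    for (i, k) in a_blocks:
        for (k2, j) in b_blocks:
            if k == k2:
                c_pattern.add((i, j))
    return c_pattern

def block_sparse_matmul(a_blocks, b_blocks, block_size):
    """a_blocks and b_blocks map a (row, col) block index to a dense
    block_size x block_size numpy array; only nonzero blocks are stored."""
    # Symbolic phase: pre-allocate every output block that can be nonzero.
    c_blocks = {ij: np.zeros((block_size, block_size))
                for ij in symbolic_phase(a_blocks, b_blocks)}
    # Numeric phase: each matching block-pair is one dense multiplication.
    for (i, k), a in a_blocks.items():
        for (k2, j), b in b_blocks.items():
            if k == k2:
                c_blocks[(i, j)] += a @ b
    return c_blocks

# Usage: A has nonzero blocks (0,0) and (1,2); B has (0,1) and (2,1).
# The symbolic phase pre-allocates exactly output blocks (0,1) and (1,1).
bs = 4
A = {(0, 0): np.random.rand(bs, bs), (1, 2): np.random.rand(bs, bs)}
B = {(0, 1): np.random.rand(bs, bs), (2, 1): np.random.rand(bs, bs)}
C = block_sparse_matmul(A, B, bs)
```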
“…High-order sparse tensors have been well studied in tensor decomposition on various hardware platforms [7, 26, 37-40, 43, 51, 52, 55, 69-71], with a focus on the product of a sparse tensor and a dense matrix or vector. Contraction between two sparse tensors (SpTC) has also been well studied [14,20,25,36,44,49,53,54,67], where block-wise sparsity is the main focus. As needs for element-/pair-wise sparsity emerge in applications from chemistry, physics and deep learning [4,17,31,41,62,63], the recent work [44] studied element-wise SpTC.…”
Section: Introduction
confidence: 99%
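
The hash-table strategy that this excerpt attributes to Sparta can be sketched as follows; this is an illustrative reconstruction with invented names, not Sparta's actual implementation. Nonzeros of one COO-format tensor are bucketed by their contracted indices, nonzeros of the other tensor probe those buckets, and the output grows dynamically in a hash table because its sparsity pattern is unknown in advance:

```python
from collections import defaultdict

def sptc(x, y, x_contract, y_contract):
    """Element-wise sparse tensor contraction. x and y map index tuples to
    values (COO-style); x_contract / y_contract list the contracted modes."""
    # Bucket Y's nonzeros by their contracted indices in a hash table.
    y_table = defaultdict(list)
    for idx, val in y.items():
        key = tuple(idx[m] for m in y_contract)
        free = tuple(idx[m] for m in range(len(idx)) if m not in y_contract)
        y_table[key].append((free, val))
    # Probe with each nonzero of X; the output is allocated dynamically,
    # since its nonzero pattern is not known ahead of time.
    z = defaultdict(float)
    for idx, xval in x.items():
        key = tuple(idx[m] for m in x_contract)
        x_free = tuple(idx[m] for m in range(len(idx)) if m not in x_contract)
        for y_free, yval in y_table.get(key, ()):
            z[x_free + y_free] += xval * yval
    return dict(z)

# Usage: Z[i, j] = sum_k X[i, k] * Y[k, j] on element-wise sparse inputs.
X = {(0, 1): 2.0, (1, 0): 3.0}
Y = {(1, 2): 4.0}
Z = sptc(X, Y, x_contract=(1,), y_contract=(0,))  # {(0, 2): 8.0}
```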
“…Our work on graph optimization builds on substantial efforts for optimization of computational graphs of tensor operations. Tensor contraction can be optimized via parallelization [22,23,41,49], efficient transposition [51], blocking [10,18,28,43], exploiting symmetry [15,48,49], and sparsity [22,24,32,39,47]. For complicated tensor graphs, specialized compilers like XLA [52] and TVM [8] rewrite the computational graph to optimize program execution and memory allocation on dedicated hardware.…”
Section: Previous Work
confidence: 99%
“…Lewis et al. introduced a clustered low-rank tensor format to exploit element and rank sparsities [81]. Block sparsity has been utilized in coupled-cluster singles and doubles (CCSD) in several works [15,37,64,91,107]. The scaled opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method uses tensor hypercontraction (THC), approximating the electron repulsion integral (ERI) tensor by decomposing it into lower-order tensors, with sparsity [133].…”
Section: 2.4
confidence: 99%
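
For context on the tensor hypercontraction mentioned in this last excerpt: in its standard form (which may differ in detail from the variant used in [133]), THC approximates the fourth-order ERI tensor by products of second-order factors,

$$(pq|rs) \approx \sum_{P,Q} X_p^P \, X_q^P \, Z^{PQ} \, X_r^Q \, X_s^Q,$$

so that storage and contraction costs scale with the matrices $X$ and $Z$ rather than with the full fourth-order tensor.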