2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
DOI: 10.1109/iccad45719.2019.8942121
Tensor Methods for Generating Compact Uncertainty Quantification and Deep Learning Models

Abstract: Tensor methods have become a promising tool for solving high-dimensional problems in the big data era. By exploiting possible low-rank tensor factorizations, many high-dimensional model-based or data-driven problems can be solved to facilitate decision making or machine learning. In this paper, we summarize recent applications of tensor computation in obtaining compact models for uncertainty quantification and deep learning. In uncertainty analysis, where obtaining data samples is expensive, we show how tensor …

Cited by 2 publications (4 citation statements). References 58 publications (60 reference statements).
“…To complete the setup of the Bayesian model (15), we still need to specify the prior on the rank-control hyperparameters Λ = λ (for CP) or Λ = {λ^(n)}_{n=1}^d (for Tucker, TT, and TTM). Note that small elements of λ and λ^(n) lead to rank reductions in the tensor models; we therefore choose two hyper-prior densities that place high probability near zero.…”
Section: Rank-shrinking Hyper-parameter Priors
confidence: 99%
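The shrinkage mechanism this statement describes can be illustrated numerically. The snippet below is a minimal sketch, not the cited paper's implementation: it assumes a Gamma hyper-prior with shape < 1 as one concrete density that places high probability near zero, and it prunes CP rank-one terms whose rank-control scale falls below a cutoff. The names `R`, `lam`, and `threshold` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a Gamma hyper-prior with shape < 1 concentrates
# most of its mass near zero, so many rank-control scales lambda_r are driven
# toward zero and the corresponding CP rank-one terms contribute (near) nothing.
R = 10                                           # maximum CP rank in the model
lam = rng.gamma(shape=0.5, scale=0.1, size=R)    # sampled rank-control scales

# Terms whose scale is (near) zero are effectively pruned, realizing the
# automatic rank reduction described in the citation statement.
threshold = 1e-2
effective_rank = int(np.sum(lam > threshold))
print(f"nominal rank {R}, effective rank after shrinkage {effective_rank}")
```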
“…Next we discuss how to estimate the resulting posterior density (15). We develop an approach based on stochastic variational inference (SVI) [27] that is compatible with the large-scale stochastic optimization required to train large neural networks.…”
Section: Parameter Inference
confidence: 99%
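To make the SVI idea concrete: the sketch below shows the generic recipe of stochastic variational inference with the reparameterization trick on a toy one-parameter Gaussian model. It is an assumption-laden illustration, not the cited paper's code: the model, the minibatch size, and all variable names (`mu`, `log_sigma`, etc.) are invented for the example; only the minibatch-rescaled ELBO structure is the standard SVI ingredient that makes it compatible with large-scale stochastic optimization.

```python
import torch

# Minimal SVI sketch: fit a Gaussian posterior q(w) = N(mu, sigma^2) to a toy
# linear-regression weight w with prior N(0, 1), using minibatch ELBO gradients.
torch.manual_seed(0)
x = torch.randn(1000)                  # toy inputs
y = 2.0 * x + 0.1 * torch.randn(1000)  # toy observations, true w = 2

mu = torch.zeros(1, requires_grad=True)         # variational mean
log_sigma = torch.zeros(1, requires_grad=True)  # variational log-std
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    idx = torch.randint(0, 1000, (64,))        # minibatch -> "stochastic" VI
    w = mu + log_sigma.exp() * torch.randn(1)  # reparameterized sample of w
    # Minibatch ELBO estimate: rescaled log-likelihood minus KL(q || prior).
    log_lik = torch.distributions.Normal(w * x[idx], 0.1).log_prob(y[idx]).sum()
    kl = torch.distributions.kl_divergence(
        torch.distributions.Normal(mu, log_sigma.exp()),
        torch.distributions.Normal(0.0, 1.0),
    ).sum()
    loss = -(1000 / 64) * log_lik + kl         # negative minibatch ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"posterior approx: mean={mu.item():.3f}, std={log_sigma.exp().item():.3f}")
```

The same ELBO structure scales to neural-network weights because the minibatch gradient is an unbiased estimate of the full-data gradient, which is exactly what makes SVI compatible with standard stochastic optimizers.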
“…For k, we choose the trilinear kernel [47] for SiamEvent, as it shows the best performance in our experiments. Since processing a 4D tensor, e.g., with 3D convolutions, incurs a high computational cost and may degrade test speed [48], we reshape S± into a 3D tensor S by stacking the voxel grids of the two polarities. In this way, the tensor can be fed to our proposed framework to track an arbitrary target.…”
Section: A. Event Representation
confidence: 99%
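The reshaping step this statement describes is a simple axis manipulation. The sketch below assumes an event voxel grid with B temporal bins per polarity; the shapes and names (`S_plus`, `S_minus`, `B`, `H`, `W`) are illustrative, not taken from the SiamEvent code.

```python
import numpy as np

# Stack the positive- and negative-polarity voxel grids, each of shape
# (B, H, W), so the 4D tensor (2, B, H, W) becomes a 3D tensor (2B, H, W)
# that a standard 2D-convolution backbone can consume without 3D convolutions.
B, H, W = 5, 128, 128
S_plus = np.random.rand(B, H, W)   # positive-polarity voxel grid
S_minus = np.random.rand(B, H, W)  # negative-polarity voxel grid

S4d = np.stack([S_plus, S_minus], axis=0)  # 4D tensor: (2, B, H, W)
S3d = S4d.reshape(2 * B, H, W)             # stacked 3D tensor: (2B, H, W)
print(S3d.shape)                           # (10, 128, 128)
```

Treating polarity as extra channels rather than a fourth tensor mode is what lets the tracker avoid the cost of 3D convolutions noted in the statement.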