2017
DOI: 10.48550/arxiv.1708.00185
Preprint

Tensorial Recurrent Neural Networks for Longitudinal Data Analysis

Cited by 5 publications (6 citation statements)
References 4 publications
“…However, we can also use other constraints instead of (9) and solve the optimization problem in (7), (8) and (9) in the same manner. As an example, a common choice of constraint for neural networks is the Frobenius norm [24], i.e., defined as…”
Section: A. Anomaly Detection with the OC-SVM Algorithm
Confidence: 99%
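The constraint referenced above is the Frobenius norm of the network weights; the exact form used in the citing paper is truncated in the excerpt, so as a point of reference only, the standard definition for a weight matrix $W \in \mathbb{R}^{m \times n}$ is

$$\|W\|_F = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} W_{ij}^{2}} = \sqrt{\operatorname{tr}\!\left(W^{\top} W\right)}.$$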
“…Remark 7. For the SVDD case, we update W(•) at the k-th iteration as in (24). However, instead of (25), we have the following definition for G…”
Section: B. Anomaly Detection with the SVDD Algorithm
Confidence: 99%
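For context, SVDD (Support Vector Data Description) fits a minimum-radius hypersphere around the data. The textbook primal, shown here only for orientation and not necessarily the exact objective or the definition of G used in the citing paper, is

$$\min_{R,\,a,\,\xi}\; R^{2} + C \sum_{i} \xi_{i} \quad \text{s.t.}\quad \|x_{i} - a\|^{2} \le R^{2} + \xi_{i},\;\; \xi_{i} \ge 0.$$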
“…Compared with the aforementioned explicit structure changes, the low-rank method is an orthogonal approach that implicitly prunes the dense connections. Low-rank tensor methods have been successfully applied to address the redundant dense connection problem in CNNs [28,47,1,38,18]. Since the key operation in one perceptron is W • x, Sainath et al. [31] decompose W with Singular Value Decomposition (SVD), reducing up to 30% of the parameters in W, but also incurring up to 10% accuracy loss [46].…”
Section: Related Work
Confidence: 99%
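As an illustration of the SVD factorization mentioned above, here is a minimal NumPy sketch (my own toy example, not code from the cited papers) that replaces a dense weight matrix with two low-rank factors:

# Minimal sketch of SVD-based low-rank compression of a dense weight matrix W.
# A rank-r truncation replaces the m x n matrix W (m*n parameters) with two
# factors of sizes m x r and r x n (r*(m+n) parameters).
import numpy as np

def svd_compress(W, rank):
    """Return factors (A, B) with W ~= A @ B using a rank-`rank` truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # m x rank, singular values folded into the left factor
    B = Vt[:rank, :]             # rank x n
    return A, B

# Example: compress a 512 x 512 layer to rank 64.
W = np.random.randn(512, 512)
A, B = svd_compress(W, rank=64)
x = np.random.randn(512)
y_full = W @ x
y_low = A @ (B @ x)              # two small matmuls instead of one large one
print(W.size, A.size + B.size)   # 262144 vs 65536 parameters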
“…In this work, we propose to design a sparsely connected tensor representation, i.e., the Block-Term decomposition (BTD) [7], to replace the redundant and densely connected operation in LSTM. The Block-Term decomposition is a low-rank approximation method that decomposes a high-order tensor into a sum of multiple Tucker decomposition models [39,44,45,21].…”
Section: Introduction
Confidence: 99%
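To illustrate the structure described above, here is a minimal NumPy sketch (an illustrative example under my own assumptions, not the authors' implementation) that reconstructs a 3rd-order tensor from a sum of Tucker terms, which is exactly the form of a Block-Term decomposition:

# Block-Term decomposition (BTD) as a sum of N Tucker terms: each term is a
# small core tensor contracted with one factor matrix per mode.
import numpy as np

def tucker_term(core, factors):
    """Reconstruct one Tucker term: core x_1 U1 x_2 U2 x_3 U3."""
    t = np.einsum('abc,ia->ibc', core, factors[0])
    t = np.einsum('ibc,jb->ijc', t, factors[1])
    t = np.einsum('ijc,kc->ijk', t, factors[2])
    return t

def btd_reconstruct(cores, factor_sets):
    """Sum of Tucker terms; cores[n] and factor_sets[n] define the n-th block."""
    return sum(tucker_term(c, fs) for c, fs in zip(cores, factor_sets))

# Example: approximate a 20 x 30 x 40 tensor with N = 4 blocks of Tucker rank (3, 3, 3).
dims, N, r = (20, 30, 40), 4, 3
cores = [np.random.randn(r, r, r) for _ in range(N)]
factor_sets = [[np.random.randn(d, r) for d in dims] for _ in range(N)]
T_hat = btd_reconstruct(cores, factor_sets)
print(T_hat.shape)  # (20, 30, 40)

With small cores and tall-thin factor matrices, the total parameter count of the blocks is far below that of the dense tensor being approximated, which is the compression effect the citing paper exploits for LSTM weights.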