2023
DOI: 10.1109/access.2023.3240784

TT-MLP: Tensor Train Decomposition on Deep MLPs

Abstract: Deep multilayer perceptrons (MLPs) have achieved promising performance on computer vision tasks. Deep MLPs consist solely of fully-connected layers, as conventional MLPs do, but adopt more sophisticated network architectures based on mixer layers composed of token-mixing and channel-mixing components. These architectures give deep MLPs global receptive fields, but the significant increase in parameters becomes a massive burden in practical applications. To tackle this problem, we focus on using tens…
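The abstract describes replacing the dense weight matrices of fully-connected layers with tensor-train (TT) factors. As a rough illustration of the standard TT-matrix format this line of work builds on (Oseledets' TT decomposition, applied to neural-network weights by Novikov et al.), the sketch below builds a random TT-factored weight matrix, contracts it back to dense form for a forward pass, and compares parameter counts. The function names (`tt_matrix`, `tt_to_full`) and the mode/rank choices are illustrative assumptions, not the paper's actual layer design or rank-selection scheme.

```python
import numpy as np

def tt_matrix(in_modes, out_modes, ranks, rng):
    """Random TT cores G_k of shape (r_{k-1}, m_k, n_k, r_k), r_0 = r_d = 1."""
    d = len(in_modes)
    assert len(out_modes) == d and len(ranks) == d + 1
    assert ranks[0] == 1 and ranks[-1] == 1
    return [rng.standard_normal((ranks[k], in_modes[k], out_modes[k], ranks[k + 1])) * 0.1
            for k in range(d)]

def tt_to_full(cores, in_modes, out_modes):
    """Contract TT cores back into the full (prod(m_k), prod(n_k)) weight matrix."""
    d = len(cores)
    full = cores[0][0]                          # (m_1, n_1, r_1)
    for k in range(1, d):
        # Contract the shared rank index r_k: (..., r_k) x (r_k, m_k, n_k, r_{k+1})
        full = np.tensordot(full, cores[k], axes=([-1], [0]))
    full = full[..., 0]                         # drop the trailing rank-1 axis
    # Axes are (m_1, n_1, ..., m_d, n_d); group all m's before all n's.
    perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
    full = full.transpose(perm)
    return full.reshape(int(np.prod(in_modes)), int(np.prod(out_modes)))

rng = np.random.default_rng(0)
in_modes, out_modes = (4, 8, 8, 4), (4, 8, 8, 4)    # factorizes a 1024 x 1024 layer
ranks = (1, 8, 8, 8, 1)                             # TT-ranks control the compression
cores = tt_matrix(in_modes, out_modes, ranks, rng)

W = tt_to_full(cores, in_modes, out_modes)          # dense reconstruction
x = rng.standard_normal(W.shape[0])
y = x @ W                                           # forward pass through the layer

tt_params = sum(c.size for c in cores)
print(f"dense params: {W.size}, TT params: {tt_params}")
# dense params: 1048576, TT params: 8448
```

In this sketch the dense 1024 x 1024 layer has 1,048,576 weights while the four TT cores total 8,448, roughly a 124x reduction; raising or lowering the TT-ranks trades reconstruction capacity against compression. In practice TT layers are evaluated by contracting the input with the cores directly rather than materializing W, which the dense reconstruction here does only for clarity.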

Cited by 2 publications (1 citation statement)
References 24 publications
“…Zongzhao Qiu [16] achieved a 33% reduction in mean squared error (MSE) by stacking GMLP and ReZero to learn sequential drug-target affinity information. Jiale Yan [17] introduced TT-MLP, which uses tensor decomposition to compress deep MLPs, thus reducing the number of trainable parameters. Wenjing Zhu [18] utilized GMLP components and the global attention module (Glam) to achieve good results.…”
Section: Introduction (mentioning, confidence: 99%)