2015
DOI: 10.1007/s10489-015-0715-5

A highly scalable modular bottleneck neural network for image dimensionality reduction and image transformation

Cited by 8 publications (2 citation statements)
References 33 publications
“…At present, most dimensionality reduction algorithms operate on vector data, while some handle higher-order tensor data. Reduced-dimension representations are used because the original high-dimensional space contains redundant and noisy information, which introduces errors and lowers accuracy in practical applications such as image recognition [8]; by reducing dimensionality, we hope to reduce the error caused by this redundant information and improve recognition accuracy.…”
Section: Related Work
confidence: 99%
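The statement above describes the core idea behind the cited paper: compressing data through a low-dimensional bottleneck discards redundant and noisy directions while preserving the structure useful for recognition. A minimal sketch of that idea, using linear PCA via SVD as a stand-in for the paper's nonlinear bottleneck network (the data and variable names here are illustrative, not from the paper):

```python
import numpy as np

# Toy demonstration: 3-D points with true 2-D structure plus a little
# noise. Projecting onto the top-2 principal directions acts as a
# 2-D "bottleneck" code; reconstructing from it keeps almost all of
# the variance while dropping the noisy redundant direction.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                       # true 2-D structure
mixing = rng.normal(size=(2, 3))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 3))   # noisy 3-D data

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)        # principal directions in Vt
Z = Xc @ Vt[:2].T                                        # 2-D bottleneck codes
X_rec = Z @ Vt[:2] + X.mean(axis=0)                      # reconstruction from codes

# Fraction of total variance retained by the 2-D representation
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

In the paper's setting the linear projection is replaced by a trained neural encoder/decoder pair, which can capture nonlinear structure that PCA cannot; the variance-retention measure plays the same role as the network's reconstruction loss.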
“…Because of that, during the convergence process we are gradually faced with extremely large PMs, and computing many iterations over such high-dimensional data subsequently becomes intractable. For this reason, our previous work was limited either in the range of polynomial parameters (Carcenac and Redif 2016) or in the number of iterations (Carcenac et al. 2017).…”
Section: Introduction
confidence: 99%