2020 10th International Conference on Advanced Computer Information Technologies (ACIT)
DOI: 10.1109/acit49673.2020.9208945

Automated Object Recognition System based on Convolutional Autoencoder

Cited by 13 publications (11 citation statements)
References 11 publications
“…A similar approach to the previous section can be demonstrated with an unsupervised neural network autoencoder model that reduces the number of parameters by compressing the observable data space into a lower-dimensional representation, while the unsupervised training process aims to improve the accuracy of regeneration from the compressed representation back to the observable space. Models of a similar type were used to create structured unsupervised representations of different data types via unsupervised autoencoder training with minimization of generative error [8,11].…”
Section: Results
confidence: 99%
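The training scheme the excerpt describes — compress the observable data to a lower-dimensional code and minimize the regeneration error back to the observable space — can be sketched minimally. This is an illustrative linear autoencoder with made-up dimensions (4 observable parameters, 2 latent), not the architecture from the cited paper:

```python
import numpy as np

# Illustrative sketch: compress 4-D observable data to a 2-D representation,
# then regenerate it, minimizing mean squared reconstruction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # observable data space
W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights (4 -> 2)
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights (2 -> 4)
lr = 0.05

def recon_error(X):
    # mean squared regeneration error over the observable space
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

loss_before = recon_error(X)
for _ in range(500):
    Z = X @ W_enc                  # compressed (latent) representation
    X_hat = Z @ W_dec              # regeneration to the observable space
    err = (X_hat - X) / len(X)
    W_dec -= lr * (Z.T @ err)      # gradient step on the decoder
    W_enc -= lr * (X.T @ (err @ W_dec.T))  # gradient step on the encoder
loss_after = recon_error(X)
```

Training is unsupervised in exactly the sense of the quote: no labels are used, only the data and its own reconstruction.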
“…A deep neural network autoencoder (method 3) produces a non-linear dimensionality reduction of the observable data to a lower-dimensional representation with the most informative features [8]. The structure of the deep neural network model used in this work is described in detail in [11]. The diagram of the architecture of the unsupervised autoencoder model is given in Fig. 1.…”
Section: Methods
confidence: 99%
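What makes the reduction non-linear, as opposed to PCA-style projection, is the activation in the encoder. A hedged sketch with assumed toy data (2-D points lying on a parabola, compressed to one latent feature through a tanh layer); shapes and the training loop are illustrative, not the model from [11]:

```python
import numpy as np

# Toy data on a 1-D non-linear manifold embedded in 2-D observable space.
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, size=(300, 1))
X = np.hstack([t, t ** 2])

W1 = rng.normal(scale=0.5, size=(2, 1))  # encoder: 2 -> 1 latent feature
W2 = rng.normal(scale=0.5, size=(1, 2))  # decoder: 1 -> 2
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1)          # non-linear compressed representation
    return H, H @ W2             # latent code and reconstruction

loss0 = float(np.mean((forward(X)[1] - X) ** 2))
for _ in range(2000):
    H, X_hat = forward(X)
    err = (X_hat - X) / len(X)
    g2 = H.T @ err                            # decoder gradient
    g1 = X.T @ ((err @ W2.T) * (1 - H ** 2))  # encoder gradient (tanh')
    W2 -= lr * g2
    W1 -= lr * g1
loss1 = float(np.mean((forward(X)[1] - X) ** 2))
```

A linear 2-to-1 projection cannot represent the curved structure of such data; the non-linearity is what lets the single latent feature stay informative.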
“…Disentangled representations were produced and discussed in [13] with a deep variational autoencoder architecture and different types of visual data, pointing at the possibility of a general nature of the effect. Concept-associated structure in artificial neural networks was observed in the representations of generative models with different types of real-world data, such as Internet traffic [14], anomaly detection [15], medical and aerial surveillance imaging [16], [17], and linguistics [18], and across a range of generative architectures [19], [20], pointing to a possible deep connection between unsupervised generative learning and characteristic structures in the latent representations of learning models that can be interpreted as general concepts in the observable data.…”
Section: Related Work
confidence: 99%
“…On the other hand, methods of unsupervised machine learning [9,10] have shown an effective ability to achieve a significant reduction of the dimensionality, or redundancy, of the observable parameter space, which in a number of cases was instrumental in the analysis and determination of characteristic patterns and trends in complex data [11][12][13], including constrained data [14]. Importantly, application of these methods does not require data labeled with a confidently known outcome and generally can be performed with smaller samples of data.…”
Section: Introduction
confidence: 99%