2020
DOI: 10.3390/s20216378
A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification

Abstract: This paper proposes a novel data classification framework, combining sparse auto-encoders (SAEs) and a post-processing system consisting of a linear system model relying on the Particle Swarm Optimization (PSO) algorithm. All the sensitive and high-level features are extracted by the first auto-encoder, which is wired to the second auto-encoder, followed by a Softmax function layer that classifies the extracted features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked i…
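As a rough illustration of the pipeline sketched in the abstract, the snippet below stacks two sparse auto-encoders and a Softmax output layer in PyTorch. The layer sizes, the sparsity weight beta, and all class and variable names are illustrative assumptions rather than the authors' configuration, and the PSO-tuned linear post-processing model is not reproduced here because its exact form is not given in the visible portion of the abstract.

```python
# Minimal sketch of a stacked sparse auto-encoder + Softmax classifier.
# Layer sizes, the sparsity penalty, and names are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAE(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

def sparse_loss(x, x_rec, h, beta=1e-3):
    # Reconstruction error plus an L1 sparsity penalty on the hidden code.
    return F.mse_loss(x_rec, x) + beta * h.abs().mean()

class StackedSAEClassifier(nn.Module):
    # Two auto-encoders wired in sequence, followed by a Softmax output layer.
    def __init__(self, n_in=784, n_h1=256, n_h2=64, n_classes=10):
        super().__init__()
        self.ae1 = SparseAE(n_in, n_h1)
        self.ae2 = SparseAE(n_h1, n_h2)
        self.out = nn.Linear(n_h2, n_classes)

    def forward(self, x):
        h1 = torch.sigmoid(self.ae1.encoder(x))
        h2 = torch.sigmoid(self.ae2.encoder(h1))
        return F.log_softmax(self.out(h2), dim=1)
```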

Cited by 18 publications (4 citation statements)
References: 49 publications
“…These are total accuracy, positive interpretation power, negative interpretation power, sensitivity, F-score and specificity. When calculating these values, the TP (true positive), TN (true negative), FP (false positive) and FN (false negative) counts are used [16][17][18]. The true-positive count refers to the number of samples for which the model predicts smoke in visual data that actually contains smoke.…”
Section: Results
confidence: 99%
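For reference, the statement above appears to rely on the standard confusion-matrix definitions of these metrics ("positive/negative interpretation power" presumably meaning positive/negative predictive value). The sketch below gives those conventional formulas; the function name and example counts are illustrative and not taken from the cited papers.

```python
# Standard confusion-matrix metrics referenced in the statement above:
# accuracy, sensitivity, specificity, PPV/NPV, and F-score.
def classification_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)            # recall / true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    ppv         = tp / (tp + fp)            # positive predictive value
    npv         = tn / (tn + fn)            # negative predictive value
    f_score     = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv,
            "f_score": f_score}

# Example with made-up counts.
print(classification_metrics(tp=90, tn=85, fp=10, fn=5))
```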
“…The most often utilised attributes for recognising targets are texture and colour moment, which are both global features. In [68], colour, texture, and shape features were extracted and an SVM classifier achieved SE = 93 and SP = 92. In [69,70], colour moment characteristics and texture features were used to perform well on a dataset of 217 melanomas and 588 photos.…”
Section: Discussion
confidence: 99%
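The pipeline described in this statement — global colour/texture features fed to an SVM, evaluated by sensitivity (SE) and specificity (SP) — could look roughly like the sketch below. The random feature matrix, labels, and SVM parameters are placeholders for illustration only, not the setup of [68].

```python
# Illustrative sketch: feature vectors -> SVM -> SE/SP from the confusion matrix.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))       # stand-in for colour/texture feature vectors
y = rng.integers(0, 2, size=500)     # stand-in for binary lesion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("SE =", tp / (tp + fn), "SP =", tn / (tn + fp))
```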
“…Auto-encoder networks consist of input, hidden, and output layers, and are not essentially different from ordinary neural networks [10]. The context auto-encoder is an improvement on the basic auto-encoder that combines contextual-information consistency with image-structure unity, which gives it an advantage in repairing occluded regions [11].…”
Section: Algorithm Principle and Implementation
confidence: 99%
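A context (inpainting) auto-encoder of the kind mentioned above can be sketched as an encoder-decoder that reconstructs a masked region, with the loss focused on the occluded pixels. The architecture, image size, and mask below are illustrative assumptions, not the network used in [11].

```python
# Minimal context auto-encoder sketch: reconstruct pixels hidden by a mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Toy usage: zero out a square region and train the loss only on that hole.
img = torch.rand(1, 3, 64, 64)
mask = torch.ones_like(img)
mask[:, :, 16:48, 16:48] = 0                     # occluded region
model = ContextAE()
rec = model(img * mask)
loss = F.mse_loss(rec * (1 - mask), img * (1 - mask))
```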