2018
DOI: 10.1016/j.neucom.2017.05.103

Improving deep neural networks with multi-layer maxout networks and a novel initialization method

Cited by 102 publications (57 citation statements)
References 10 publications
“…Sun et al. proposed Multi-layer Maxout Networks (MMN), whose multi-layer structure makes the activation function itself trainable, and derived a new initialization method dedicated to the maxout activation of MMN. The method reduces internal covariate shift as the signal propagates through the layers [6]. A. Pacheco determined the input weights and biases of the Extreme Learning Machine (ELM) using a Restricted Boltzmann Machine (RBM), an approach named RBM-ELM [4].…”
Section: Introduction
mentioning
confidence: 99%
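For context, a maxout unit replaces a fixed scalar nonlinearity with the maximum over k learned affine maps, which is what makes the activation trainable. Below is a minimal NumPy sketch of one maxout layer's forward pass; the function name, sizes, and plain Gaussian initialization are illustrative assumptions, and the paper's dedicated initialization method is not reproduced here.

```python
import numpy as np

def maxout_forward(x, W, b):
    """Maxout layer: elementwise max over k affine maps.

    x: (batch, d_in) input
    W: (k, d_in, d_out) weights, one slice per linear piece
    b: (k, d_out) biases, one row per linear piece
    Returns (batch, d_out).
    """
    # z[j] = x @ W[j] + b[j] for each piece j; shape (k, batch, d_out)
    z = np.einsum("bi,kio->kbo", x, W) + b[:, None, :]
    return z.max(axis=0)

# Illustrative sizes and init; NOT the paper's configuration or method.
rng = np.random.default_rng(0)
d_in, d_out, k, batch = 8, 4, 3, 5
W = rng.normal(scale=0.1, size=(k, d_in, d_out))
b = np.zeros((k, d_out))
x = rng.normal(size=(batch, d_in))
h = maxout_forward(x, W, b)
print(h.shape)  # (5, 4)
```

Because the max selects among k learned linear pieces per output unit, the layer learns a piecewise-linear convex activation rather than applying a fixed nonlinearity such as ReLU.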
“…After extracting the features from the three modalities, feature fusion is carried out using the proposed EDHO algorithm, which is derived by integrating EHO [27] and DHOA [28]. Finally, recognition is performed with a Deep Maxout network [29]. Figure 1 portrays the schematic view of the proposed EDHO‐based hybrid fusion model.…”
Section: Proposed Elephant Deer Hunting Optimization‐based Hybrid Fusion Model
mentioning
confidence: 99%
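A "Deep Maxout network" in this sense is several maxout layers stacked ahead of a classification head. The sketch below is a generic, self-contained NumPy version under assumed layer sizes and class count; it is not the cited paper's architecture, and the EDHO fusion step is taken as given (its fused feature vector is simulated with random input).

```python
import numpy as np

def maxout(x, W, b):
    # Max over k affine pieces: W is (k, d_in, d_out), b is (k, d_out).
    z = np.einsum("bi,kio->kbo", x, W) + b[:, None, :]
    return z.max(axis=0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def deep_maxout_classify(x, layers, W_out, b_out):
    """Stack maxout layers, then a linear + softmax classification head."""
    h = x
    for W, b in layers:
        h = maxout(h, W, b)
    return softmax(h @ W_out + b_out)

# Illustrative fused-feature dimension and class count; not from the paper.
rng = np.random.default_rng(1)
dims, k, n_classes = [16, 12, 8], 2, 3
layers = [(rng.normal(scale=0.1, size=(k, dims[i], dims[i + 1])),
           np.zeros((k, dims[i + 1]))) for i in range(len(dims) - 1)]
W_out = rng.normal(scale=0.1, size=(dims[-1], n_classes))
b_out = np.zeros(n_classes)
probs = deep_maxout_classify(rng.normal(size=(4, dims[0])), layers, W_out, b_out)
print(probs.shape, probs.sum(axis=1))  # (4, 3), each row sums to 1
```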
“…Two interesting and successful approaches are Maxout [Goodfellow et al., 2013, Sun et al., 2018] and NIN [Lin et al., 2013]. However, despite their good performance, both approaches move away from the concept of a trainable activation function as previously discussed, insofar as the adaptable function does not correspond to a neuron activation function in which the neuron output is computed from a scalar value (the neuron input) according to the standard two-stage process.…”
Section: Other Approaches
mentioning
confidence: 99%
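The distinction drawn here can be stated precisely: a standard neuron computes its output in two stages, a scalar pre-activation followed by a fixed scalar nonlinearity, whereas a maxout unit maximizes over k affine maps, so no single scalar pre-activation exists. In generic notation (symbols are illustrative, not taken from the cited papers):

```latex
% Standard two-stage neuron: scalar pre-activation z, then scalar nonlinearity
z = \mathbf{w}^{\top}\mathbf{x} + b, \qquad y = \varphi(z)

% Maxout unit: maximum over k affine pieces; there is no single scalar z
y = \max_{j \in \{1,\dots,k\}} \left( \mathbf{w}_j^{\top}\mathbf{x} + b_j \right)
```

Because the max couples k distinct pre-activations, the learned nonlinearity cannot be factored into a scalar function applied to a single input, which is exactly the departure from the two-stage neuron model the authors describe.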