2018
DOI: 10.48550/arxiv.1801.06434
Preprint

EffNet: An Efficient Structure for Convolutional Neural Networks

Cited by 5 publications (4 citation statements)
References 6 publications
“…For that, there are two possible categories to work on: (1) classification NNs that are modified to perform regression and re-trained [28], and (2) regression NNs [29]. Although the classification accuracy achieved by the one-sided Jacobi TSVM could be obtained by an existing NN model from the two above-mentioned categories, the main concern remains the computational complexity. Even the smallest models, such as MobileNet [30], ShuffleNet [31], and EffNet [32], contain at least four layers. Among regression NNs, on the other hand, a shallow network with only one hidden layer is considered the smallest possible regression model.…”
Section: A Network Structure
Citation type: mentioning (confidence: 99%)
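
As a concrete contrast to the four-plus-layer models named in the excerpt, a one-hidden-layer regression network can be sketched as follows. This is a minimal illustration only; PyTorch and the hidden width are assumptions, not taken from the cited paper.

import torch
import torch.nn as nn

# A one-hidden-layer regression network, the "smallest possible
# regression NN model" in the sense of the excerpt above.
# hidden_dim = 16 is a hypothetical width chosen for illustration.
class ShallowRegressor(nn.Module):
    def __init__(self, in_features: int, hidden_dim: int = 16):
        super().__init__()
        self.hidden = nn.Linear(in_features, hidden_dim)  # the single hidden layer
        self.act = nn.ReLU()
        self.out = nn.Linear(hidden_dim, 1)  # scalar regression output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(self.act(self.hidden(x)))

# Usage: ShallowRegressor(in_features=8)(torch.randn(4, 8)) has shape (4, 1).
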
“…Since most of the computation in deep networks is due to convolutions, it is logical to focus on reducing the computational burden they impose. Howard et al. (2017), Zhang et al. (2017), and Freeman et al. (2018) employ variations of convolutional layers that take advantage of the separability of the kernels, either by convolving with a group of input channels or by splitting the kernels across their dimensions. In addition to depth-wise convolutions, MobileNetV2 (Sandler et al., 2018) uses inverted residual structures to learn how to combine the inputs in a residual block.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)

Self-Binarizing Networks. Lahoud, Achanta, Márquez-Neila et al., 2019 (Preprint)
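
The excerpt above describes the depthwise separable convolutions that MobileNet-style architectures build on: a per-channel spatial convolution followed by a 1x1 pointwise convolution that mixes channels. A minimal sketch, assuming PyTorch (the page itself contains no code):

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch makes the spatial convolution depthwise:
        # one filter per input channel, no cross-channel mixing.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        # 1x1 pointwise convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Usage: DepthwiseSeparableConv(32, 64)(torch.randn(1, 32, 56, 56)).shape
# -> torch.Size([1, 64, 56, 56])
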
“…Yet another variation of the MobileNet building block is the EffNet architecture [34], motivated by a careful study. EffNet uses depthwise separable convolutions and pushes them even further by splitting the 3×3 convolution into a pair of convolutions with 1×3 and 3×1 kernels.…”
Section: Speeding-up Convolutional Neural Network
Citation type: mentioning (confidence: 99%)
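
A minimal sketch of the kernel split this excerpt describes, again assuming PyTorch; only the 1×3/3×1 depthwise pair is shown, not the full EffNet block from the paper:

import torch
import torch.nn as nn

# The 3x3 depthwise convolution is replaced by a 1x3 depthwise
# convolution followed by a 3x1 depthwise convolution, reducing
# the multiply-adds per output pixel from 9 to 6 per channel.
def split_depthwise(channels: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=(1, 3),
                  padding=(0, 1), groups=channels, bias=False),  # row-wise 1x3
        nn.Conv2d(channels, channels, kernel_size=(3, 1),
                  padding=(1, 0), groups=channels, bias=False),  # column-wise 3x1
    )

# Usage: split_depthwise(32)(torch.randn(1, 32, 56, 56)).shape
# -> torch.Size([1, 32, 56, 56])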