2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.71

Factorized Convolutional Neural Networks

Cited by 168 publications (108 citation statements)
References 7 publications
“…To speed up the computation and reduce the model size, Gonda et al [19] proposed a novel strategy of replacing 3D convolution layers with pairs of 2D and 1D convolution layers. Numerous other attempts have been made, in various fashions, to improve the efficiency of convolution with depthwise separable convolutions [20][21][22]. Zhang et al [23] combined depthwise separable convolution and spatially separable convolution for liver tumour segmentation.…”
Section: Related Work
confidence: 99%
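The depthwise separable factorization discussed in this excerpt splits a standard convolution into a per-channel spatial filter followed by a 1x1 channel-mixing filter. The PyTorch sketch below is a minimal illustration of that idea, not code from any of the cited papers; the class name and default parameters are assumptions for the example.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    """A per-channel (depthwise) spatial convolution followed by a 1x1
    pointwise convolution that mixes information across channels."""
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # groups=in_channels applies one k x k filter to each input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   stride=stride, padding=padding, groups=in_channels)
        # The 1x1 pointwise convolution recombines the per-channel outputs.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)
    layer = DepthwiseSeparableConv2d(32, 64)
    print(layer(x).shape)  # -> torch.Size([1, 64, 56, 56])
```

Compared with a dense k x k convolution over all channel pairs, this split reduces the multiply count roughly by a factor of k*k for typical channel widths, which is the efficiency gain the cited works exploit.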
“…Flattened networks [17] construct networks out of fully factorized convolutions and demonstrate the potential of exceptionally factorized networks. Factorized Networks [18] present a similar factorized convolution along with the use of topological connections. Subsequently, the Xception network [19] [21] and deep fried convnets [22].…”
Section: MobileNet Architecture
confidence: 99%
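The fully factorized (spatially separable) convolutions referenced here replace a k x k filter with a k x 1 filter followed by a 1 x k filter. The PyTorch sketch below illustrates that decomposition under generic assumptions; it is not the exact layer used by flattened networks [17] or Factorized Networks [18].

```python
import torch
import torch.nn as nn

class SpatiallyFactorizedConv2d(nn.Module):
    """Approximates a k x k convolution with a k x 1 convolution followed by
    a 1 x k convolution, cutting the spatial cost per output from k*k to 2*k."""
    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        # Vertical pass: filters of shape (k, 1), padded to preserve height.
        self.vertical = nn.Conv2d(in_channels, out_channels, kernel_size=(k, 1),
                                  padding=(k // 2, 0))
        # Horizontal pass: filters of shape (1, k), padded to preserve width.
        self.horizontal = nn.Conv2d(out_channels, out_channels, kernel_size=(1, k),
                                    padding=(0, k // 2))

    def forward(self, x):
        return self.horizontal(self.vertical(x))

if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)
    print(SpatiallyFactorizedConv2d(16, 32)(x).shape)  # -> torch.Size([1, 32, 32, 32])
```

The factorization is exact only when the original k x k kernel is rank-1 in its spatial dimensions; in practice the two 1D layers are simply trained end to end as a cheaper substitute.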
“…Numerous methods have been proposed to reduce the computational demands of a deep model by trading prediction accuracy for runtime, via compressing a pre-trained network [6,15,21,24,36,42,47], training small networks directly [11,37], or a combination of both [19]. Using these approaches, a user now needs to decide when to use a specific model, in order to meet the prediction accuracy requirement with minimal latency.…”
Section: Related Work
confidence: 99%