2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.46
Sparse, Quantized, Full Frame CNN for Low Power Embedded Devices

Cited by 28 publications (21 citation statements) · References 5 publications
“…The function of the backbone encoder is to process an input image and extract rich, abstract features that capture the crucial information in the image. Instead of adopting very deep and wide architectures such as AlexNet [19], GoogleNet [20], DenseNet [21], ResNet101 [22], and VGG16 [23], which comprise numerous parameters and incur higher computation costs, the proposed method adopts JacintoNet [24], an open-source lightweight architecture designed for embedded devices. JacintoNet is a modified ResNet-10 with the shortcut connections removed.…”
Section: A Backbone Encoder
confidence: 99%
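The excerpt's distinction between a standard residual block and JacintoNet's shortcut-free block can be sketched schematically. This is an illustrative abstraction, not the actual JacintoNet code: `f` stands in for the block's conv-BN-ReLU transform, and scalars stand in for feature maps.

```python
def residual_block(x, f):
    # Standard ResNet block: the transform's output is added to an
    # identity shortcut of the input.
    return f(x) + x

def plain_block(x, f):
    # JacintoNet-style block, per the excerpt: the shortcut is removed,
    # leaving only the feed-forward transform (ResNet-10 minus skips).
    return f(x)

double = lambda v: 2 * v
print(residual_block(3, double))  # 9: transform (6) plus shortcut (3)
print(plain_block(3, double))     # 6: transform only
```

Dropping the shortcut removes the element-wise add and the need to keep the block's input resident, which is one reason the plain form suits memory-constrained embedded targets.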
“…Sparse representations become more efficient when supported by proper hardware. Examples of integrated accelerators include the Texas Instruments TDAx processor family [18] and the custom ASIC described in [19]. Unfortunately, low power budgets prevent the use of such components on MCUs.…”
Section: A Pruning
confidence: 99%
“…There have been works that apply these techniques to semantic segmentation [14]. However, the focus in [14] was on coarse segmentation over only a few classes, while we attempt to build efficient models for the Cityscapes benchmark [3] with all the classes. Newer approaches to model compression involve designing efficient CNN modules.…”
Section: Model Compression
confidence: 99%
“…Grouped convolution is another way of building structured sparse convolutions. Such a convolution with groups parameter g decreases the parameters and FLOPs of the layer by a factor of g. Grouped convolutions also help in data bandwidth reduction [14]; they were first implemented in AlexNet [11] and have been used for efficient layer design in ResNeXt [21].…”
Section: Grouped Convolutions
confidence: 99%
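The factor-g reduction the excerpt states can be verified with a parameter-count sketch. `conv2d_params` is a hypothetical helper written for this illustration (bias terms omitted): each of the g groups maps c_in/g input channels to c_out/g output channels with its own k×k kernels, so the total shrinks by exactly g.

```python
def conv2d_params(c_in, c_out, k, groups=1):
    # Weight count of a 2D convolution split into `groups` groups.
    # Each group: (c_in/g) input channels -> (c_out/g) output channels,
    # so total = g * (c_in/g) * (c_out/g) * k * k = c_in*c_out*k*k / g.
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

dense   = conv2d_params(256, 256, 3)            # standard 3x3 convolution
grouped = conv2d_params(256, 256, 3, groups=4)  # same layer with g = 4
print(dense, grouped, dense // grouped)         # 589824 147456 4
```

Because FLOPs scale linearly with the weight count for a fixed output size, the same factor-of-g saving applies to the layer's compute, matching the excerpt's claim.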