Proceedings of the International Conference on Computer-Aided Design 2018
DOI: 10.1145/3240765.3240845
Scalable-effort ConvNets for multilevel classification

Cited by 8 publications (4 citation statements) | References 7 publications
“…However, all objects belonging to the same category may not always have the same textures or colors, as shown in […] more closely account for visual similarities to obtain significantly higher accuracy in classification without using textures and colors. [51][52][53][54] generate hierarchies using semantic similarity between categories or use pre-defined semantic hierarchies like WordNet [61]. WordNet is a large lexical database of the English language where nouns, verbs, adjectives, and adverbs are linked to one another using conceptual-semantic and lexical relations.…”
Section: MNN-Tree (Proposed Method)
confidence: 99%
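The excerpt above describes building category hierarchies from semantic similarity, e.g. over WordNet. A minimal sketch of the idea, using a hand-made toy taxonomy as a stand-in for WordNet (the tree, node names, and the path-based similarity formula here are illustrative assumptions, not the cited papers' exact method):

```python
# Toy taxonomy as a child -> parent map; "entity" is the root.
# (Illustrative stand-in for WordNet's hypernym hierarchy.)
TOY_TAXONOMY = {
    "animal": "entity", "vehicle": "entity",
    "dog": "animal", "cat": "animal",
    "car": "vehicle", "truck": "vehicle",
}

def path_to_root(node):
    """Return the chain of ancestors from `node` up to the root."""
    path = [node]
    while node in TOY_TAXONOMY:
        node = TOY_TAXONOMY[node]
        path.append(node)
    return path

def path_similarity(a, b):
    """1 / (1 + path length through the lowest common ancestor)."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, ancestor in enumerate(pa):
        if ancestor in pb:
            return 1.0 / (1 + i + pb.index(ancestor))
    return 0.0

# Sibling leaves score higher than leaves in different branches:
print(path_similarity("dog", "cat"))   # 1/3, via "animal"
print(path_similarity("dog", "car"))   # 1/5, via "entity"
```

Grouping categories whose pairwise similarity exceeds a threshold yields the kind of semantic hierarchy the cited works [51]–[54] construct.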
“…Images in the same category may have different colors and textures. Semantic similarity [51][52][53][54]: use semantic information from sources like WordNet to quantify the similarity.…”
Section: Techniques
confidence: 99%
“…By performing pruning, quantization, and encoding, Deep Compression [4] reduces the model size by 95%. Path-level pruning is also seen in tree-based hierarchical DNNs [29,30]. Although these techniques can identify the unimportant connections, they create unwanted sparsity in DNNs.…”
Section: B. Pruning Parameters and Connections
confidence: 99%
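The pruning the excerpt refers to can be illustrated with magnitude pruning: zeroing the smallest-magnitude weights, which shrinks the model but leaves the unstructured sparsity the excerpt criticizes. A minimal sketch (the function name and the 75% pruning fraction are illustrative assumptions, not Deep Compression's exact procedure):

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the `fraction` of entries with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.75)

# The zeros are scattered (unstructured), so dense hardware gains little
# without sparse storage formats or structured pruning.
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```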
“…Hybrid solutions may jointly exploit the complexity of the input problem with the accuracy imposed at the application level. For instance, the authors of [35] introduce the concept of multi-level classification where the classification task can be performed at different levels of semantic abstraction: the higher the abstraction, the easier the classification problem. Then, depending on the abstraction level and the desired accuracy, the ConvNet is tuned to achieve the maximum energy efficiency.…”
Section: Adaptive ConvNets
confidence: 99%
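The multi-level classification described in [35] can be sketched as a coarse-to-fine decision: a coarse label at a higher abstraction level is cheaper to produce than a fine-grained one, and the effort can be tuned to the accuracy the application needs. A toy sketch of that trade-off (the class names, feature keys, and two-stage rule here are illustrative assumptions, not the paper's ConvNet):

```python
def classify(features, level):
    """Toy multilevel classifier: the coarse level needs one test,
    the fine level adds a second, more specific test."""
    # Stage 1: cheap, high-abstraction decision
    coarse = "vehicle" if features["wheels"] > 0 else "animal"
    if level == "coarse":
        return coarse  # higher abstraction: easier problem, less effort
    # Stage 2: extra effort spent only when a fine label is requested
    if coarse == "vehicle":
        return "truck" if features["size"] > 5 else "sedan"
    return "dog" if features["size"] > 1 else "cat"

x = {"wheels": 4, "size": 2}
print(classify(x, "coarse"))  # vehicle
print(classify(x, "fine"))    # sedan
```

In the paper's setting the second stage corresponds to running deeper or wider portions of the ConvNet, so stopping at the coarse level saves energy when the application tolerates the coarser label.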