2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00638

AOGNets: Compositional Grammatical Architectures for Deep Learning

Abstract: Neural architectures are the foundation for improving performance of deep neural networks (DNNs). This paper presents deep compositional grammatical architectures which harness the best of two worlds: grammar models and DNNs. The proposed architectures integrate compositionality and reconfigurability of the former and the capability of learning rich features of the latter in a principled way. We utilize AND-OR Grammar (AOG) [55,75,74] as network generator in this paper and call the resulting networks AOGNets. …
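To make the grammar-as-generator idea concrete, below is a minimal PyTorch sketch of the three node types an AND-OR grammar composes over feature maps: Terminal-nodes apply an operation, AND-nodes decompose a feature map across children and recompose by concatenation, and OR-nodes sum alternative computations. All class names, channel counts, and operator choices here are illustrative assumptions, not the authors' released AOGNet implementation.

```python
# Illustrative sketch of AND-OR composition over feature maps
# (assumed structure; not the authors' AOGNet code).
import torch
import torch.nn as nn

class TerminalNode(nn.Module):
    """Terminal: apply a conv block to its input (hypothetical op choice)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.op(x)

class AndNode(nn.Module):
    """AND: decomposition -- split channels among children, concat results."""
    def __init__(self, children):
        super().__init__()
        self.kids = nn.ModuleList(children)

    def forward(self, x):
        chunks = torch.chunk(x, len(self.kids), dim=1)
        return torch.cat([k(c) for k, c in zip(self.kids, chunks)], dim=1)

class OrNode(nn.Module):
    """OR: alternative ways of computing the same entity -- sum children."""
    def __init__(self, children):
        super().__init__()
        self.kids = nn.ModuleList(children)

    def forward(self, x):
        out = self.kids[0](x)
        for k in self.kids[1:]:
            out = out + k(x)
        return out

# Example: an OR over two alternatives -- one full-width terminal vs. an AND
# decomposition into two half-channel terminals (channel counts assumed).
C = 64
node = OrNode([
    TerminalNode(C, C),
    AndNode([TerminalNode(C // 2, C // 2), TerminalNode(C // 2, C // 2)]),
])
y = node(torch.randn(1, C, 32, 32))  # -> shape (1, 64, 32, 32)
```

Unfolding such a grammar recursively is what yields a reconfigurable, compositional network topology rather than a single hand-designed block.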

Cited by 21 publications (14 citation statements)
References 49 publications
“…Some recent substantial models have been developed on top of the base models: Res2Net [83] and Wide ResNet [84] extend the ResNet model, while Log Dense Net [85] and Sparse Net [86] extend the DenseNet model. Another path of development, combining multiple base models, has resulted in a number of hybrid models including AOGNet [87], PNASNet [88], AmoebaNet [89], DPN [90], HCGNet [76], GCNet [91], ThiNet [92], and SKNet [93].…”
Section: Model Development
confidence: 99%
“…We have designed CNN models using the widely adopted "stage-wise building-block" strategy [37]. As shown in Figure 2, the CNN model consists of several "RANGE"s; each RANGE contains some convolutional layers, and each convolutional layer is followed by a batch normalization layer and a leaky rectified linear activation layer (Leaky ReLU).…”
Section: CNN Model Design and Parameters for Optimization
confidence: 99%
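The stage-wise design quoted above maps directly to code. The sketch below is a hedged illustration of that pattern: "RANGE" is the citing paper's term for a stage, and the conv → BatchNorm → LeakyReLU ordering is taken from the quote, while the stage widths, depths, negative slope, and pooling between stages are assumptions for the example.

```python
# Sketch of the quoted "RANGE" stage design: each stage stacks conv layers,
# each followed by BatchNorm and Leaky ReLU (widths/depths/pooling assumed).
import torch
import torch.nn as nn

def make_range(in_ch, out_ch, num_convs):
    """One RANGE: num_convs conv layers, each with BN + LeakyReLU."""
    layers = []
    ch = in_ch
    for _ in range(num_convs):
        layers += [
            nn.Conv2d(ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
        ]
        ch = out_ch
    return nn.Sequential(*layers)

# Stage-wise model: several RANGEs, with assumed downsampling between them.
model = nn.Sequential(
    make_range(3, 32, 2), nn.MaxPool2d(2),
    make_range(32, 64, 2), nn.MaxPool2d(2),
    make_range(64, 128, 3),
)
y = model(torch.randn(1, 3, 64, 64))  # -> shape (1, 128, 16, 16)
```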
“…• Networks imbued with logical structure, e.g., probabilistic decision trees. By forcing the network to reason logically, we may be better able to understand its reasoning process (e.g., Wu and Song [63], Li, Song and Wu [64]).…”
Section: Interpretable ML as the Way Forward in Embryo Selection
confidence: 99%