Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA), 2017
DOI: 10.1145/3079856.3080254
SCNN: An Accelerator for Compressed-Sparse Convolutional Neural Networks

Cited by 658 publications (47 citation statements; citing publications from 2018 to 2023)
Citation statements: 0 supporting, 47 mentioning, 0 contrasting
References 17 publications

“…However, these functions result in dense activations, which are computationally expensive (Kernell 2018). Modern neural networks commonly use the Rectified Linear Unit (ReLU) function because it results in sparse and efficient activations, making it less computationally expensive (Parashar et al. 2017).…”
Section: Artificial Neural Networks (ANNs), mentioning
confidence: 99%
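
As a minimal sketch of the sparsity claim in the excerpt above (not code from any of the cited papers; the input values and shapes are illustrative stand-ins for one layer's pre-activations), the following compares how many exact zeros ReLU and a sigmoid produce on the same inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in pre-activation values for one layer; standard normal,
# so roughly half are negative.
x = rng.standard_normal((4, 8))

# ReLU clamps every negative value to exactly 0, giving ~50% zeros
# here -- the "sparse activations" the excerpt refers to. Downstream
# multiplies by zero can be skipped, which is the efficiency angle.
relu_out = np.maximum(x, 0.0)

# A sigmoid is nonzero almost everywhere: a "dense" activation.
sigmoid_out = 1.0 / (1.0 + np.exp(-x))

print("ReLU zero fraction:   ", float(np.mean(relu_out == 0.0)))    # ~0.5
print("Sigmoid zero fraction:", float(np.mean(sigmoid_out == 0.0)))  # 0.0
```

Exploiting exactly this kind of activation sparsity (together with weight sparsity) is the premise of the SCNN accelerator the report covers.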
“…The VGG-16 model architecture comprises 16 weighted layers and is freely available for download on GitHub. VGG-16 is distinguished from other CNNs by its use of many convolutional layers with progressively smaller receptive fields, as opposed to only a few convolutional layers with large fields (Parashar et al. 2017). A receptive field in a CNN is the spatial extent of the input the model attends to as each convolution is applied.…”
Section: Artificial Neural Networks (ANNs), mentioning
confidence: 99%
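
To make the receptive-field point concrete (a back-of-envelope sketch, not taken from the cited paper; the channel count C is an assumed example value and biases are omitted), two stacked 3x3 convolutions cover the same 5x5 input window as a single 5x5 convolution while using fewer weights:

```python
# Assumed example: C input channels and C output channels per layer.
C = 64

# One 5x5 convolution covers a 5x5 input window directly.
params_one_5x5 = 5 * 5 * C * C

# Two stacked 3x3 convolutions (stride 1): each layer widens the
# receptive field by k - 1 = 2, so 1 + 2 + 2 = 5 -- the same 5x5
# window, with an extra nonlinearity in between and fewer weights.
params_two_3x3 = 2 * (3 * 3 * C * C)

print(params_one_5x5)  # 102400
print(params_two_3x3)  # 73728
```

This parameter saving, repeated across many layers, is why VGG-style networks favor stacks of small kernels over a few large ones.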