DOI: 10.14711/thesis-b1198629

Some research issues in hash function learning

Cited by 1 publication (1 citation statement)
References: 0 publications
“…In this study, the convolutional layers all have four kernel filters of kernel size 2 × 2 and a step size of one with no padding around the input data. This activation function is the rectified linear unit (ReLU) [8] . Pooling reduces the size mapping of the features by about half.…”
Section: Image Classification Model
confidence: 99%
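
The quoted statement specifies the citing paper's convolutional configuration: four 2 × 2 kernel filters, stride one, no padding, ReLU activation, and a pooling step that roughly halves the feature map. A minimal sketch of such a block, assuming PyTorch, a single-channel input, and the hypothetical class name SmallConvBlock (none of which are given in the source), could look like this:

    import torch
    import torch.nn as nn

    class SmallConvBlock(nn.Module):
        """Sketch of the convolutional block described in the citation:
        four 2x2 filters, stride 1, no padding, ReLU, then 2x2 pooling.
        The input channel count (1) is an assumption for illustration."""

        def __init__(self, in_channels: int = 1):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels=4,
                                  kernel_size=2, stride=1, padding=0)
            self.relu = nn.ReLU()
            # 2x2 max pooling with stride 2 halves each spatial dimension,
            # matching "reduces the size mapping of the features by about half".
            self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.pool(self.relu(self.conv(x)))

    # Example: a 28x28 single-channel input becomes a 4-channel 13x13 map
    # (28 -> 27 after the 2x2 conv, 27 -> 13 after 2x2 pooling).
    x = torch.randn(1, 1, 28, 28)
    print(SmallConvBlock()(x).shape)  # torch.Size([1, 4, 13, 13])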