2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852453
Quantum-Inspired Neural Architecture Search

Cited by 21 publications (19 citation statements)
References 14 publications
“…Furthermore, the proposed SODBAE (BLSK-NS) model achieves an error rate of 9.72% and outperforms the remaining related studies, including Q-NAS [20], EPSO-CNN [8], SMAC with predictive termination [9], SMAC [9], HORD [10], NMM [11], PSO-b [34], and GA-CNN [35], by 1.28%, 10.13%, 7.48%, 7.75%, 10.82%, 0.33%, 8.81% and 15.69%, respectively. Moreover, the proposed SODBAE (BLSK-S) method achieves an error rate of 13.74% and outperforms related studies, e.g., EPSO-CNN [8], SMAC with predictive termination [9], SMAC [9], HORD [10], PSO-b [34], and GA-CNN [35], by 6.11%, 3.46%, 3.73%, 6.8%, 4.79% and 11.67%, respectively.…”
Section: Comparison With Related Studies
confidence: 82%
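The margins quoted in this statement are differences in error rate against the SODBAE (BLSK-NS) figure of 9.72%, so the absolute error rates of the compared methods (including Q-NAS) can be recovered by adding each margin back. A minimal Python sketch of that bookkeeping follows; the dictionary names and the assumption that the margins are plain percentage-point differences are ours, not taken from the citing paper.

```python
# Sketch (assumption: quoted margins are percentage-point differences in error rate).
blsk_ns_error = 9.72  # % error reported for SODBAE (BLSK-NS) in the citing paper

# margins by which SODBAE (BLSK-NS) is said to outperform each baseline
margins = {
    "Q-NAS": 1.28,
    "EPSO-CNN": 10.13,
    "SMAC (pred. termination)": 7.48,
    "SMAC": 7.75,
    "HORD": 10.82,
    "NMM": 0.33,
    "PSO-b": 8.81,
    "GA-CNN": 15.69,
}

for method, margin in margins.items():
    # implied baseline error rate = BLSK-NS error + quoted margin
    print(f"{method}: {blsk_ns_error + margin:.2f}% implied error rate")
```

Run as-is, this prints, for example, an implied Q-NAS error rate of 11.00%, which is how the 1.28% margin in the quotation should be read.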
“…Large-Scale Evolution [1], Genetic CNN [2], and Hierarchical Representations [4] achieved comparatively better classification performances, but at the expense of significantly larger computational costs, i.e., 250 × 264, 20 × 24 and 200 × 36 GPU-hours. Q-NAS [20] illustrated a less competitive performance, but also with a significantly large computational cost, i.e., 20 × 50 GPU-hours. As such, these experimental settings may not be accessible to average consumers, owing to the vast resource requirements.…”
Section: Comparison With Related Studies
confidence: 98%
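The cost figures in this statement are quoted as products of two factors, which read most naturally as number of GPUs times wall-clock hours. Under that assumption (not stated explicitly in the citing paper), a short Python sketch expands them into total GPU-hours for an apples-to-apples comparison; the method names and the GPUs-times-hours reading are ours.

```python
# Sketch: expand the quoted "a x b GPU-hours" cost figures into totals.
# Assumption (not stated in the citing paper): first factor = number of GPUs,
# second factor = wall-clock hours.
costs = {
    "Large-Scale Evolution": (250, 264),
    "Genetic CNN": (20, 24),
    "Hierarchical Representations": (200, 36),
    "Q-NAS": (20, 50),
}

for method, (gpus, hours) in costs.items():
    print(f"{method}: {gpus} GPUs x {hours} h = {gpus * hours:,} GPU-hours")
```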