2021
DOI: 10.48550/arxiv.2101.03181
Preprint
Miniaturizing neural networks for charge state autotuning in quantum dots

Stefanie Czischek,
Victor Yon,
Marc-Antoine Genest
et al.

Cited by 2 publications (2 citation statements)
References 38 publications
“…With the NISQ era on the horizon [38], it is important to consider the practical aspect of implementing automated control as part of the device itself, in the "on-chip" fashion. The network architecture necessary for RBC is significantly simpler and smaller than for CNN classification, making it more suitable for an implementation on miniaturized hardware with low power consumption in the near future [39,40]. In particular, the neural network used to train the RBC comprises only four fully connected dense layers with 128, 64, 32, and 5 units, respectively.…”
Section: Discussion
confidence: 99%
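The citation statement above specifies the classifier architecture concretely: four fully connected dense layers with 128, 64, 32, and 5 units. A minimal forward-pass sketch of that shape is below. The input dimension (784, i.e. a flattened 28×28 stability-diagram patch), the ReLU activations, and the softmax output are assumptions for illustration; the paper may use different choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # He-style initialization, appropriate for ReLU hidden layers
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

# Four fully connected layers with 128, 64, 32, and 5 units,
# as described in the citation statement. Input size 784 is assumed.
layers = [dense(784, 128), dense(128, 64), dense(64, 32), dense(32, 5)]

def forward(x):
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:        # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    # Numerically stable softmax over the 5 output classes
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

probs = forward(rng.normal(size=(1, 784)))
print(probs.shape)  # (1, 5)
```

A network this small (roughly 10^5 parameters under the assumed input size) is what makes the low-power, on-chip implementation discussed above plausible.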
“…We developed a physics-inspired simulator and introduced cross-device validation to address this challenge. In the context of tuning quantum devices, deep learning has been used for various other tasks [17,19,[23][24][25][26][27]], with some approaches using simulated data to train their algorithms [15,16,28].…”
Section: Introduction
confidence: 99%