2021
DOI: 10.48550/arxiv.2106.03796
Preprint

Enabling On-Device Self-Supervised Contrastive Learning With Selective Data Contrast

Abstract: After a model is deployed on edge devices, it is desirable for these devices to learn from unlabeled data to continuously improve accuracy. Contrastive learning has demonstrated great potential for learning from unlabeled data. However, the online input data are usually non-independent and identically distributed (non-i.i.d.), and edge devices' storage is usually too limited to hold enough representative data from different classes. We propose a framework to automatically select the most representative…
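The abstract builds on contrastive learning from unlabeled data. As generic background only, and not the paper's selective-data-contrast method, below is a minimal sketch of a standard SimCLR-style NT-Xent contrastive objective, assuming two augmented views of each sample are available:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, D) embeddings of two augmentations of the same N samples.
    Positive pairs are (z1[i], z2[i]); every other sample in the batch
    serves as a negative.
    """
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                      # exclude self-similarity
    # For row i, the positive is the embedding of the other view of sample i.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

On a device with limited storage and non-i.i.d. input, which negatives end up in the batch strongly affects this loss; the paper's contribution (truncated above) concerns selecting which data to retain for that purpose.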

Cited by 1 publication (1 citation statement)
References 22 publications
“…Goal 3: The quantum architecture could be trained on classical computers correctly and efficiently. Although training DNNs on classical computers is expensive and inefficient [46]-[48], we still have to train QNNs on classical computers because near-term quantum computers (NISQ) have a limited number of qubits and high noise on each qubit, which makes QNN training on quantum computers unstable and not scalable. A straightforward way to achieve equivalent training on classical computers is to formulate each quantum gate as a unitary matrix.…”
Section: QF-MixNN: A Quantum Neural Architecture (mentioning)
confidence: 99%
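The quoted passage ends by noting that equivalent classical training is obtained by formulating each quantum gate as a unitary matrix. As an illustration of that general idea only (not taken from either paper), the NumPy sketch below represents a parameterized single-qubit rotation gate as a 2x2 unitary and applies it to a state vector with an ordinary matrix-vector product:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation gate R_y(theta) as a 2x2 unitary matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

# |0> state of one qubit.
state = np.array([1.0, 0.0])

# Applying the gate is just matrix multiplication. A parameterized QNN layer
# chains such unitaries, so its output is an ordinary differentiable function
# of theta and can be trained with classical gradient-based optimizers.
theta = 0.3
out = ry(theta) @ state
probs = np.abs(out) ** 2          # measurement probabilities
assert np.isclose(probs.sum(), 1.0)  # unitarity preserves normalization
```

Because the matrices are unitary, the simulated state stays normalized, which is what makes this classical formulation faithful to the quantum circuit it stands in for.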