2022 IEEE International Conference on Cloud Engineering (IC2E)
DOI: 10.1109/ic2e55432.2022.00032
Guarding Against Universal Adversarial Perturbations in Data-driven Cloud/Edge Services

Cited by 4 publications (1 citation statement)
References 21 publications
“…Regardless of whether we are using CPUs or GPUs for inference execution, an inference request often goes through the logical steps of loading the model and waiting in queues. As prior studies demonstrate, model loading time can be orders of magnitude higher than other inference time components with current deep learning frameworks such as TensorFlow or with serverless computing [86]–[90]. One naïve solution to reducing model loading time is to keep models in memory; this might work well for popular models but can lead to low resource utilization for less popular models.…”
Section: IoT and AI Are Becoming the Main Applications
Mentioning confidence: 99%
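The trade-off the quoted passage describes — keeping models resident in memory to avoid expensive cold loads, at the cost of wasting memory on unpopular models — can be sketched as a least-recently-used (LRU) model cache. This is an illustrative sketch only; the class, model names, and `loader` callback are hypothetical and not taken from the cited work:

```python
from collections import OrderedDict

class ModelCache:
    """Hypothetical sketch: keep at most `capacity` models resident,
    evicting the least recently used model when a cold one must be loaded."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader          # expensive load (disk/network -> memory)
        self.cache = OrderedDict()    # model name -> loaded model
        self.loads = 0                # count of expensive cold loads

    def get(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)    # warm hit: mark as recently used
            return self.cache[name]
        self.loads += 1                     # cold start: pay the load cost
        self.cache[name] = self.loader(name)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return self.cache[name]

# Popular models stay resident; unpopular ones get evicted and reloaded.
cache = ModelCache(capacity=2, loader=lambda n: f"weights:{n}")
for req in ["resnet", "resnet", "bert", "resnet", "gpt", "bert"]:
    cache.get(req)
print(cache.loads)  # → 4 cold loads across 6 requests
```

Under a skewed request stream, such a policy concentrates memory on popular models, which is exactly why the naïve keep-everything-in-memory approach wastes resources on the long tail.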