2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS)
DOI: 10.1109/icdcs.2018.00154
Computation Offloading for Machine Learning Web Apps in the Edge Server Environment

Cited by 68 publications (42 citation statements)
References 18 publications
“…Also, new functionality was added to other FAs. Inference was included to the 'Interfacing and visualization'-FA based on a prototype, where a pre-trained model was used for inference in a web browser [27]. Model packaging comprises adding of model(s) into executable or loadable file(s) (e.g.…”
Section: Discussion
confidence: 99%
“…When the location of architectural elements is analysed in detail, it can be seen that Data transformation and Serving functionality was executed in the edge server (transformation and serving of end users with augmented video streaming content [32]). Models were trained mainly in private/public clouds [36,37,39] or in edge devices [35,46] while the results were inferred from models in all environments [27,28,30,31,45]. Also, resource demanding Deep analytics and Machine learning, and Job scheduling tasks were executed mostly in private/public clouds, and/or edge environments.…”
Section: Discussion
confidence: 99%