Proceedings of the 2018 Workshop on Advanced Tools, Programming Languages, and PLatforms for Implementing and Evaluating Algorithms for Distributed Systems (ApPLIED 2018)
DOI: 10.1145/3231104.3231114
Turn of the Carousel - What Does Edge Computing Change for Distributed Applications?

Cited by 3 publications (5 citation statements)
References 3 publications
“…But object popularity can be difficult to estimate and can change over time, especially at the level of small geographical areas (as in the case of areas served by an edge server) [166]. Other papers [118][119][120][121][122][167] present a more high-level view of the different components of the application system, without specific contributions in terms of cache management policies (e.g., they apply minor changes to exact caching policies like LRU or LFU). Some recent papers [127,168,169] propose online caching policies that try to minimize the total cost of the system (the sum of the dissimilarity cost and the fetching cost), also in a networked context [168,170], but their schemes apply only to the case 𝑘 = 1, which is of limited practical interest.…”
Section: Discussion
confidence: 99%
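The per-request decision behind a total-cost-minimizing policy for the 𝑘 = 1 case can be sketched as follows. This is an illustrative toy, not the cited papers' exact schemes: the function name `serve_cost`, the callable `dissimilarity`, and the fixed `fetch_cost` are assumptions made for the example.

```python
# Toy per-request cost rule for an online similarity caching policy with k = 1
# (a single cached object may be used to answer each request). The cache either
# answers approximately, paying the dissimilarity cost to the closest cached
# key, or fetches the exact object, paying a fixed fetching cost.

def serve_cost(query, cached_keys, dissimilarity, fetch_cost):
    """Return (action, cost) for one request under a greedy total-cost rule."""
    if cached_keys:
        best = min(cached_keys, key=lambda k: dissimilarity(k, query))
        d = dissimilarity(best, query)
        if d < fetch_cost:
            # Approximate answer is cheaper than fetching the exact object.
            return ("approximate", d)
    return ("fetch", fetch_cost)
```

The total cost over a request sequence is then the sum of these per-request costs, which is the quantity the online policies above aim to minimize.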
“…Likewise, recommender systems can leverage similarity caches [110,128]: a recommender system can save operating costs and decrease its response time by recommending relevant content from a cache in response to user-generated queries, i.e., in case of a cache miss, an application proxy (e.g., YouTube) running close to the helper node (e.g., at a multi-access edge computing server) can recommend the most related files that are locally cached. More recently, similarity caches have been employed extensively in machine-learning-based inference systems to store queries and the respective inference results to serve future requests, for example, prediction serving systems [117], image recognition systems [118,119,121], object classification on the cloud [122], caching hidden layer outputs of a neural network to accelerate computation [123], and network traffic classification tasks [145]. The cache can indeed respond with the result of a previous query that is very similar to the current one, thus reducing the computational burden and latency of running complex machine learning inference models.…”
Section: Related Work
confidence: 99%
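The mechanism described in the statement above, answering a new query with the cached result of a sufficiently similar earlier query, can be sketched as a minimal similarity cache with LRU eviction. This is an assumption-laden illustration: the class name `SimilarityCache`, the numeric-key `dissimilarity` metric, and the threshold rule are inventions for the example (real systems would compare embeddings of queries or images).

```python
# Minimal similarity-cache sketch with LRU eviction (illustrative only).
# An approximate hit returns the value of the closest cached key within a
# dissimilarity threshold; a miss pays the fetching cost and may evict the
# least recently used entry.

from collections import OrderedDict

def dissimilarity(a, b):
    """Toy metric on numeric keys; real systems use embedding distances."""
    return abs(a - b)

class SimilarityCache:
    def __init__(self, capacity, threshold):
        self.capacity = capacity
        self.threshold = threshold
        self.store = OrderedDict()  # key -> value, ordered by recency

    def get(self, query, fetch):
        # Approximate hit: closest cached key within the threshold.
        if self.store:
            best = min(self.store, key=lambda k: dissimilarity(k, query))
            if dissimilarity(best, query) <= self.threshold:
                self.store.move_to_end(best)  # refresh recency
                return self.store[best]
        # Miss: run the expensive computation (or fetch), then cache it.
        value = fetch(query)
        self.store[query] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return value
```

For instance, with `threshold=1`, a query for key 11 after caching key 10 is served approximately from the entry for 10, which is exactly the miss-avoidance behavior the quoted passage attributes to inference-serving deployments.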