Proceedings of the ACM Symposium on Cloud Computing 2021
DOI: 10.1145/3472883.3486974

Faa$T: A Transparent Auto-Scaling Cache for Serverless Applications

Cited by 64 publications (17 citation statements)
References 16 publications

“…The edge nodes are selected randomly from the set of nodes in the topology. Workload: For the workload, we adopted a trace-driven approach starting from a dataset of traces obtained in 2020 from a production system and publicly released by Microsoft [22]. The dataset contains a log of read/write backend activities of user applications, with associated geographical regions.…”
Section: A. Methodology, Assumptions and Tools
confidence: 99%
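As an illustration of the trace-driven setup described in the statement above, here is a minimal sketch of loading such a trace and grouping read/write events by region for replay. The CSV layout and the column names 'timestamp', 'region', and 'op' are assumptions made for illustration, not the actual schema of the released Microsoft dataset [22].

```python
import csv
from collections import defaultdict

def load_trace(path):
    """Group read/write events by region and sort them by time so each
    region's activity can be replayed against a simulated backend."""
    events_by_region = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # 'timestamp', 'region', and 'op' are assumed column names.
            events_by_region[row["region"]].append(
                (float(row["timestamp"]), row["op"])
            )
    for events in events_by_region.values():
        events.sort()  # replay in time order within each region
    return events_by_region
```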
“…As a result, a larger number of containers may be required to host more serverless functions, thereby raising the cost of infrastructure leasings, such as VM or pods. Previous studies have investigated several metrics to assess autoscaling solutions from a cost perspective, including resource cost [22,23], overhead cost [24,25], and additional serverful cost [26]. Resource cost pertains to the number of containers activated as invokers, workers, or servers to handle workloads.…”
Section: Cost
confidence: 99%
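For the resource-cost metric mentioned above (containers activated as invokers, workers, or servers), a minimal sketch under the assumption of a flat per-container-hour price; the function name and the pricing model are illustrative, not taken from the cited studies.

```python
def resource_cost(active_containers, interval_s, price_per_container_hour):
    """Charge for every container kept active during each sampling
    interval. `active_containers` is a list of per-interval counts;
    the flat per-hour price is an illustrative assumption."""
    hours_per_interval = interval_s / 3600.0
    return sum(n * hours_per_interval * price_per_container_hour
               for n in active_containers)

# Example: six 10-second samples at a hypothetical $0.05 per container-hour.
print(resource_cost([4, 4, 6, 8, 8, 5], interval_s=10,
                    price_per_container_hour=0.05))
```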
“…This state can be used to cache locally the state that is expensive to obtain or compute [19], [20]. Recent proposals further extend FaaS platforms so that storage accesses are transparently cached within function containers [37], [38]. As cache hits are served from local memory, applications can see speedups of up to 92% [38].…”
Section: Palette Load Balancing: Locality Hints for Serverless Functions
confidence: 99%
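A minimal sketch of the idea of transparently caching storage reads inside a function container, so that repeated accesses are served from local memory; the in-memory dict and the simulated remote store are stand-ins for illustration, not the Faa$T implementation [38].

```python
import time

REMOTE_STORE = {"input/data.bin": b"x" * 1_000_000}  # stand-in for remote blob storage
_local_cache = {}  # lives in the function container's memory

def remote_get(key):
    time.sleep(0.05)  # simulated network round trip
    return REMOTE_STORE[key]

def cached_get(key):
    """Serve repeated reads from container-local memory; only the first
    access for a key pays the remote round trip."""
    if key not in _local_cache:
        _local_cache[key] = remote_get(key)
    return _local_cache[key]

cached_get("input/data.bin")  # miss: fetched remotely
cached_get("input/data.bin")  # hit: served from local memory
```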
“…As a second example, we look at a FaaS implementation of Dask [42], where each DAG node is executed as a serverless invocation. We use the Faa$T [38] distributed serverless cache running on the Azure Functions host [43], and augment its interface to also take locality hints for intermediate DAG data. Our results show that Palette reduces run times by 46% and 18% on Task Bench and TPC-H, respectively, compared to a locality-oblivious FaaS platform.…”
Section: Palette Load Balancing: Locality Hints for Serverless Functions
confidence: 99%
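A minimal sketch of hint-based placement in the spirit of the statement above: a locality hint is treated as an opaque string hashed onto a worker, so invocations sharing a hint land on the node that already caches the intermediate DAG data. The worker pool and the hashing scheme are assumptions, not the actual Palette/Faa$T load balancer.

```python
import hashlib
import random

WORKERS = ["worker-0", "worker-1", "worker-2"]  # hypothetical node pool

def pick_worker(locality_hint=None):
    """Map a locality hint deterministically onto a worker so invocations
    sharing a hint co-locate; hintless calls fall back to random
    spreading (a locality-oblivious baseline)."""
    if locality_hint is None:
        return random.choice(WORKERS)
    digest = hashlib.sha256(locality_hint.encode()).digest()
    return WORKERS[int.from_bytes(digest[:4], "big") % len(WORKERS)]

# Invocations that consume the same intermediate DAG output co-locate.
assert pick_worker("dag-node-17/output") == pick_worker("dag-node-17/output")
```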