Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data
DOI: 10.1145/2723372.2749432

Resource Elasticity for Large-Scale Machine Learning

Abstract: Declarative large-scale machine learning (ML) aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations to distributed computations on MapReduce (MR) or similar frameworks. State-of-the-art compilers in this context are very sensitive to memory constraints of the master process and MR cluster configuration. Different memory configurations can lead to significant performance differences. Interestingly, resource negotiation f…

Cited by 48 publications (24 citation statements)
References 39 publications
“…In contrast to loop fusion, the dependencies are implicitly given by the data flow graph and operation semantics [9]. SystemML uses rewrites to identify special operator patterns, and replaces them with hand-coded local or distributed fused operators [7,13,37]. Other systems like Cumulon [36] and MatFast [92] use more generic masked and folded binary operators to exploit sparsity across matrix multiplications and element-wise operations.…”
Section: Related Work
confidence: 99%
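The masked binary operators mentioned in the excerpt above can be illustrated with a small sketch. The function below is a hypothetical, simplified example (not taken from any of the cited systems) of fusing a matrix multiplication with an element-wise product: `C = (A @ B) * mask` is computed only at positions where `mask` is nonzero, so the full dense product is never materialized.

```python
def masked_matmul(A, B, mask):
    """Hypothetical sketch of a 'masked' fused operator.

    Computes C[i][j] = dot(A[i], B[:, j]) * mask[i][j], but only where
    mask[i][j] != 0, exploiting the sparsity of the mask across the
    matrix multiplication and the element-wise product.
    """
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if mask[i][j] != 0:
                # Only the dot products required by nonzero mask
                # entries are ever evaluated.
                C[i][j] = sum(A[i][t] * B[t][j] for t in range(k)) * mask[i][j]
    return C
```

With a mostly-zero mask, this fused form does work proportional to the number of nonzero mask entries rather than to the full output size, which is the benefit the excerpt attributes to such operators.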
“…This engine is responsible for configuring execution settings, such as memory allocation and the number of map and reduce tasks, by answering questions about real and hypothetical input parameters using a random search algorithm. What-if analysis is also employed by [37] to optimally configure memory settings. The distinctive feature of this proposal is that it is dynamic, in the sense that it can take decisions at runtime, leading to task migrations.…”
Section: Execution Engine Configuration
confidence: 99%
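The what-if-driven configuration described in the excerpt above can be sketched as a randomized search over candidate memory configurations, each scored by a cost model rather than an actual run. Everything here, including the function name, the candidate list, and the cost model, is a hypothetical illustration and not the cited systems' actual interface.

```python
import random

def search_memory_config(candidates, estimate_runtime, trials=8, seed=0):
    """Hypothetical what-if search: sample candidate memory configurations
    and keep the one with the lowest estimated runtime.

    estimate_runtime is a cost model answering a 'what-if' question;
    no configuration is actually executed during the search.
    """
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for cfg in rng.sample(candidates, k=min(trials, len(candidates))):
        cost = estimate_runtime(cfg)  # answered by the model, not by a run
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```

For instance, with driver-memory candidates of 512, 1024, 2048, and 4096 MB and a toy cost model minimized at 2048 MB, the search returns the 2048 MB configuration; a dynamic variant, as the excerpt notes, would additionally revisit this decision at runtime.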
“…[28] developed an approach to finding a near-optimal memory configuration that minimizes the time to build a machine learning model. However, the approach offers no guarantee on model-building time.…”
Section: Related Work
confidence: 99%