2021
DOI: 10.14778/3467861.3467869

Tensors

Abstract: Deep Learning (DL) has created a growing demand for simpler ways to develop complex models and efficient ways to execute them. Thus, a significant effort has gone into frameworks like PyTorch or TensorFlow to support a variety of DL models and run efficiently and seamlessly over heterogeneous and distributed hardware. Since these frameworks will continue improving given the predominance of DL workloads, it is natural to ask what else can be done with them. This is not a trivial question since these frameworks …

Cited by 16 publications (2 citation statements)
References 46 publications
“…Deep Learning Compilers. Deep learning compilers have been one of the main focuses of the research community due to the need to flexibly deploy ML models on modern hardware platforms [10,24,29,41,51,52,55,56]. Compilers like Apache TVM [6], Facebook's Glow [11], Intel's nGraph [8], Nvidia's TensorRT [44], Google's XLA [38] and TensorFlow Lite [27] are noteworthy examples that are widely used to compile deep learning models.…”
Section: Related Work
confidence: 99%
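
To make the compilation step concrete, here is a minimal sketch that lowers a small PyTorch model to a static graph with TorchScript tracing, used purely as a stand-in for the compilers named above (TVM, Glow, nGraph, TensorRT, XLA and TensorFlow Lite each have their own entry points). The toy model and file name are illustrative assumptions, not taken from the cited work.

# Minimal sketch: tracing a small PyTorch model into a static graph
# that a compiler backend can optimize and deploy. TorchScript is used
# here only as a stand-in for the compilers listed above.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
example_input = torch.randn(1, 16)

# Tracing records the tensor operations executed on the example input.
compiled = torch.jit.trace(model, example_input)
compiled.save("model.pt")  # serialized graph, loadable from C++/mobile runtimes

with torch.no_grad():
    print(compiled(example_input).shape)  # torch.Size([1, 4])
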
“…We believe graph processing and relational operators, two use cases that are very different from DL yet in high demand, are two candidates that complement DL workloads quite well. Much work is needed to understand how to translate graph processing and relational operator algorithms into tensor computations [169].…”
Section: Future Work Related To Hummingbird
confidence: 99%
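
The translation idea raised in this statement can be illustrated with a small sketch: a relational selection-plus-aggregation and one hop of graph traversal, both expressed purely as PyTorch tensor operations. The column names, values and edge list below are illustrative assumptions, not an implementation from the cited paper.

# Sketch: a relational filter + aggregate as tensor operations.
# Roughly: SELECT SUM(amount) FROM orders WHERE customer_id = 7
import torch

customer_id = torch.tensor([3, 7, 7, 1, 7])            # one tensor per column
amount = torch.tensor([10.0, 25.0, 5.0, 40.0, 15.0])
mask = (customer_id == 7).float()                      # predicate as a 0/1 tensor
print((amount * mask).sum().item())                    # 45.0

# Sketch: one hop of graph traversal as a sparse matrix-vector product.
n = 4
src = [0, 0, 1, 2]
dst = [1, 2, 3, 3]
adj = torch.sparse_coo_tensor(torch.tensor([dst, src]),        # adj[d, s] = 1
                              torch.ones(len(src)), (n, n))
frontier = torch.zeros(n)
frontier[0] = 1.0                                      # start from vertex 0
reached = torch.sparse.mm(adj, frontier.unsqueeze(1)).squeeze(1) > 0
print(reached)                                         # vertices one hop from 0

Both patterns run unchanged on CPU or GPU and inherit the tensor runtime's batching and hardware support, which is precisely the appeal of pushing non-DL workloads onto these frameworks.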