2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date54114.2022.9774739

Hardware Acceleration of Explainable Machine Learning

Abstract: Machine learning (ML) has been successful in achieving human-level artificial intelligence in various fields. However, it lacks the ability to explain an outcome due to its black-box nature. While recent efforts on explainable AI (XAI) have received significant attention, most of the existing solutions are not applicable in real-time systems, since they cast interpretability as an optimization problem, which leads to numerous iterations of time-consuming complex computations. Although there are existing hardware-based a…
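The runtime bottleneck the abstract alludes to is easiest to see in code. The sketch below is a generic perturbation-based explainer in the LIME style, not this paper's method; the explain function and the toy model are hypothetical. Each explanation needs n_samples forward passes through the model plus a regression solve, and the whole loop repeats for every query, which is why software-only XAI struggles in real-time systems.

# Illustrative sketch (not this paper's method): a LIME-style local surrogate.
# Cost per explanation = n_samples model calls + one least-squares solve.
import numpy as np

def explain(model, x, n_samples=1000, sigma=0.1):
    """Fit a local linear surrogate around input x; return feature weights."""
    rng = np.random.default_rng(0)
    perturbed = x + sigma * rng.standard_normal((n_samples, x.size))
    preds = np.array([model(p) for p in perturbed])  # n_samples forward passes
    A = np.hstack([perturbed, np.ones((n_samples, 1))])  # add bias column
    weights, *_ = np.linalg.lstsq(A, preds, rcond=None)  # the optimization step
    return weights[:-1]  # per-feature attribution scores

toy_model = lambda v: float(np.tanh(v).sum())  # hypothetical stand-in model
print(explain(toy_model, np.zeros(5)))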

Cited by 13 publications (5 citation statements)
References 26 publications
“…It was the careful reformulation and combination of methods presented in this paper that was necessary to achieve the demonstrated level of performance. Indeed, even as hardware becomes more efficient and higher performing, we expect the same basic advantage to hold: low-precision arithmetic on specialized hardware, such as tensor cores and tensor processing units, or field-programmable gate arrays and neuromorphic processors, will be substantially faster than higher-precision arithmetic on those same architectures. In this way, our results present a general road map for higher-performance QMD simulations also using future accelerated hardware.…”
Section: Discussion
confidence: 99%
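The claimed advantage of low-precision arithmetic on tensor cores can be reproduced with a few lines of standard PyTorch. This is a minimal benchmark sketch, assuming a CUDA device with tensor cores; the matrix size and iteration count are arbitrary, and the timings are illustrative rather than taken from the cited work.

# Minimal benchmark sketch: FP16 vs FP32 dense matmul on a GPU.
# Assumes PyTorch and a CUDA device with tensor cores.
import time
import torch

n = 4096
a32 = torch.randn(n, n, device="cuda")  # FP32 operands
b32 = torch.randn(n, n, device="cuda")
a16, b16 = a32.half(), b32.half()       # low-precision (FP16) copies

def bench(a, b, iters=20):
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        _ = a @ b  # dense matrix-matrix multiply
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

print(f"FP32: {bench(a32, b32) * 1e3:.2f} ms")
print(f"FP16: {bench(a16, b16) * 1e3:.2f} ms  (tensor cores engage here)")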
“…In this article, we explore how tensor cores can be used as an effective tool to accelerate QMD simulations. Tensor cores, and the closely related tensor processing units, are a new form of hardware designed for calculations involving deep neural networks in machine learning applications, and they provide an extraordinary amount of computational speed and energy efficiency. However, peak performance is limited to tensor contractions, i.e., matrix–matrix multiplications, using only low, mixed-precision floating-point operations.…”
Section: Introduction
confidence: 99%
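Because peak throughput is confined to matrix-matrix multiplication, the usual trick is to recast other computations as tensor contractions. Below is a minimal NumPy sketch of that reformulation pattern, not of the cited paper's method: a 2D cross-correlation rewritten as a single matmul via the conventional im2col unfolding.

# Sketch: recasting a computation as a matrix-matrix multiply, the shape of
# work tensor cores are built for. Here, 2D cross-correlation via im2col.
import numpy as np

def im2col(x, k):
    """Unfold the k x k patches of a 2D array into the columns of a matrix."""
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((k * k, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i+k, j:j+k].ravel()
            idx += 1
    return cols

x = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
# The sliding-window operation becomes one (1 x 9) @ (9 x 36) contraction.
y = (kernel.ravel()[None, :] @ im2col(x, 3)).reshape(6, 6)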
“…The original image dataset is scaled down again using the operator-supervised bicubic method. The operator is used here under the assumption that it is a perfect AI engine, as compared with the distance-median algorithm, to check the image quality of the down-scaling [22]-[25]. Figure 3 shows the research design of the Plumeria L. classification.…”
Section: Methods
confidence: 99%
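For reference, bicubic down-scaling of the kind this excerpt describes is a one-liner in standard image libraries. A minimal Pillow sketch follows; the file names and the 4x reduction factor are placeholders, not values from the cited study.

# Bicubic down-scaling with Pillow; file names and scale factor are placeholders.
from PIL import Image

img = Image.open("plumeria.jpg")  # hypothetical input image
w, h = img.size
small = img.resize((w // 4, h // 4), Image.Resampling.BICUBIC)
small.save("plumeria_small.jpg")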
“…Since their public release, TPU accelerators have been applied to various research topics that demand very high processing and memory capabilities. Examples of such domains are DNN training subject to large batch sizes and specialized learning-rate algorithms [17], [18], distributed evolution strategies for meta-learning [19], acceleration of explainable machine learning [20], and simulation of quantum physics [21], to mention a few. Regarding the transformer, NLP and CV projects constitute the majority of TPU-assisted research using this architecture, partially due to the high availability of pre-trained models and TPU-ready software implementations.…”
Section: Applications of Tensor Processing Units to Transformers and ...
confidence: 99%