Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3470808
Explainability for Natural Language Processing

Abstract: This lecture-style tutorial, which mixes in an interactive literature browsing component, is intended for the many researchers and practitioners working with text data and on applications of natural language processing (NLP) in data science and knowledge discovery. The focus of the tutorial is on the issues of transparency and interpretability as they relate to building models for text and their applications to knowledge discovery. As black-box models have gained popularity for a broad range of tasks in recent…

Cited by 8 publications (3 citation statements) | References 3 publications
“…We suggest the use of SHAP (Lundberg & Lee, 2017), an open-source, game-theoretic approach for explaining the outcomes of ML models to support the comparison of their XAI, recently used on multimodal biosensing data for understanding stress detection (Chalabianloo et al., 2022). In addition, XAI approaches are particularly useful for explaining why an NLP model made a specific prediction, allowing biases in the model's decision-making process to be identified and corrected (Danilevsky et al., 2021).…”
Section: Towards User's Comparative Exploration of AI-based Models An… (mentioning, confidence: 99%)
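To make the quoted point concrete, the following is a minimal sketch of using SHAP to attribute a text classifier's prediction to individual tokens. It assumes the shap and scikit-learn packages; the toy corpus, labels, and TF-IDF + logistic-regression model are illustrative stand-ins, not taken from the cited works.

    # Minimal SHAP sketch for a text classifier (illustrative data and model).
    import numpy as np
    import shap
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["great service and friendly staff",
             "terrible delay and rude response",
             "friendly staff, quick response",
             "rude staff and terrible service"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    vec = TfidfVectorizer()
    X = vec.fit_transform(texts).toarray()
    clf = LogisticRegression().fit(X, labels)

    # LinearExplainer computes Shapley values for linear models, using the
    # training matrix as the background distribution.
    explainer = shap.LinearExplainer(clf, X)
    shap_values = explainer.shap_values(X)

    # Rank tokens by their contribution to the first document's prediction.
    tokens = vec.get_feature_names_out()
    for i in np.argsort(-np.abs(shap_values[0]))[:5]:
        print(f"{tokens[i]}: {shap_values[0][i]:+.3f}")

Per-token attribution scores of this kind are what let a practitioner spot when a model's decision leans on spurious or biased cues, which is the bias-detection use case the citing authors describe.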
“…Since eXplainable Artificial Intelligence (XAI) systems have become an integral part of many real-world applications, there is an increasing number of XAI approaches [84], covering both white-box and black-box models. The first group, which includes decision trees, hidden Markov models, logistic regressions, and other machine learning algorithms, is inherently explainable, whereas the second group, which includes deep learning models, is less explainable [40]. XAI has been characterized along different dimensions, for example: (i) the level of the explanation, whether for a single prediction (local explanation) or for the model's prediction process as a whole (global explanation); and (ii) whether the explanation requires post-processing (post-hoc) or is produced by the model itself (self-explaining).…”
Section: On the Explainability of AI Models (mentioning, confidence: 99%)
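The local/global distinction in (i) can be illustrated with an inherently explainable (white-box) model. The sketch below assumes scikit-learn and uses its bundled breast-cancer dataset purely as stand-in data; none of the names come from the cited works.

    # Global vs. local explanations for a white-box (logistic regression) model.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    X, y, names = data.data, data.target, data.feature_names

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    clf = model.named_steps["logisticregression"]

    # Global explanation: the learned coefficients describe how the model
    # weighs each feature across all predictions.
    global_importance = np.abs(clf.coef_[0])
    print("global:", [names[i] for i in np.argsort(-global_importance)[:3]])

    # Local explanation: per-feature contribution to one instance's log-odds,
    # i.e. coefficient * standardized feature value.
    x_std = model.named_steps["standardscaler"].transform(X[:1])[0]
    local_contrib = clf.coef_[0] * x_std
    print("local:", [names[i] for i in np.argsort(-np.abs(local_contrib))[:3]])

Here the coefficient magnitudes give a self-explaining, global view, while the per-instance contributions give a local view; post-hoc methods such as SHAP or LIME recover analogous quantities for black-box models.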
“…XAI has also been characterized according to the source of the explanations, for example: (i) surrogate models, in which the model's predictions are explained by learning a second model as a proxy, as is the case with LIME [162]; (ii) example-driven approaches, in which the prediction for an input instance is explained by identifying other (labeled) instances that are semantically similar [35]; (iii) attention layers, which appeal to human intuition and help indicate where the neural network model is "focusing"; and (iv) feature importance, in which the relevance scores of different features are used to produce the final prediction [40].…”
Section: On the Explainability of AI Models (mentioning, confidence: 99%)
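As an illustration of the surrogate-model idea in (i), the sketch below uses LIME to fit a local linear proxy around one prediction of a black-box text pipeline. It assumes the lime and scikit-learn packages; the corpus, labels, and example sentence are invented for illustration.

    # LIME surrogate explanation for a text classification pipeline.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great service and friendly staff",
             "terrible delay and rude response",
             "friendly staff, quick response",
             "rude staff and terrible service"]
    labels = [1, 0, 1, 0]

    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(texts, labels)

    # LIME perturbs the input text, queries the black-box pipeline on the
    # perturbed samples, and fits a local linear surrogate to its outputs.
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    exp = explainer.explain_instance("friendly staff but terrible delay",
                                     pipeline.predict_proba,
                                     num_features=4)
    print(exp.as_list())  # (token, local weight) pairs from the surrogate

The reported weights come from the surrogate rather than the original model, which is what distinguishes this family from feature-importance methods that read relevance scores off the model itself.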