2020
DOI: 10.1016/j.dss.2020.113302

Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information

Cited by 105 publications (40 citation statements)
References 22 publications

“…Several methods based on statistical machine learning (e.g., naive Bayes, decision trees, hidden Markov models, maximum entropy, and look‐up tables) and deep learning (e.g., convolutional neural network and recurrent neural network) algorithms can be employed to enhance the accuracy and efficiency of syntactic and semantic analysis of sentences (Kim, Park, & Suh, 2020; Laylavi, Rajabifard, & Kalantari, 2017; Luo, 2021; Luo & Chen, 2020; Zhang et al., 2011). The main aim of our study was to investigate the potential of UGC for DHM.…”
Section: Methods (mentioning)
confidence: 99%
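The statement above names convolutional neural networks among the methods for sentence-level text analysis. As a rough illustration only, here is a minimal 1-D convolutional text classifier in PyTorch; the class name TextCNN, the vocabulary size, and all hyperparameters are placeholders and are not taken from the cited studies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One 1-D convolution per n-gram window size.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len) of token ids
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Convolve, apply ReLU, then max-pool over the remaining sequence positions.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, num_classes) logits

model = TextCNN()
dummy_batch = torch.randint(1, 5000, (8, 40))  # 8 toy "sentences" of 40 token ids each
print(model(dummy_batch).shape)                # torch.Size([8, 2])
```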
“…Additionally, improved explainability can lead to greater support for working with such systems [6]. Regarding decision support, explainability can also lead to enhanced fairness [32]. To increase the explainability of the system, different techniques and visualizations can be utilized and combined [5][6][7].…”
Section: Definition of Design Principles and Features (mentioning)
confidence: 99%
“…(DF 2) Provide the confidence for the present classification. The probability of the classification is presented as the confidence of the AI system, which is also an eponymous goal of XAI [32]. This confidence is shown as the classification's probability value, expressed as a percentage.…”
Section: Definition of Design Principles and Features (mentioning)
confidence: 99%
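Design feature DF 2 above reports the classification probability as the system's confidence, expressed as a percentage. The sketch below shows one plausible way to do that with a softmax over the model's logits; the function name and the class labels are illustrative assumptions, not the implementation of the citing paper.

```python
import torch
import torch.nn.functional as F

def classification_with_confidence(model, token_ids, class_names):
    """Return the predicted label and its softmax probability as a percentage."""
    model.eval()
    with torch.no_grad():
        logits = model(token_ids)            # (1, num_classes)
        probs = F.softmax(logits, dim=1)     # probabilities over the classes, summing to 1
        conf, idx = probs.max(dim=1)         # winning class and its probability
    return class_names[idx.item()], 100.0 * conf.item()

# Usage with the toy TextCNN sketched earlier (both names are placeholders):
# label, confidence = classification_with_confidence(model, dummy_batch[:1], ["negative", "positive"])
# print(f"{label} ({confidence:.1f}% confidence)")
```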
“…Imaging is one of the emerging fields in DL; the majority of works have tried to explain imaging systems for specific problems [70, 71]. However, language processing, accompanied by the availability of large text datasets, has become a centre of interest for many researchers; one remarkable work was done by [72] on explaining a huge text corpus. Although the imaging system is more clarified and flexible, the way the graph was generated does not benefit from graph-based technologies that optimize the input starting from naive generation.…”
Section: Literature Review (mentioning)
confidence: 99%
“…Example-based approaches; research in this area is always conducted through a training example, by specifying some initial observations that are then verified through feature extraction. This discipline is widely adopted despite the difficulty of verifying the trustworthiness of each example, and it covers:
✓ Gradient methods (e.g., guided backpropagation, layer-wise relevance propagation [72]), which aim at better gradient optimization.
✓ Saliency feature maps [73] for measuring pattern importance within images and videos.…”
Section: Literature Review (mentioning)
confidence: 99%
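The gradient methods and saliency maps listed in the statement above can be illustrated with a plain input-gradient saliency sketch. This is not guided backpropagation or layer-wise relevance propagation, which modify the backward pass with their own rules; the model and input below are hypothetical placeholders.

```python
import torch

def input_gradient_saliency(model, inputs, target_class):
    """Gradient of the target-class score w.r.t. the input, used as a crude importance map."""
    model.eval()
    inputs = inputs.clone().requires_grad_(True)  # track gradients with respect to the input
    score = model(inputs)[0, target_class]        # scalar score for the chosen class
    score.backward()                              # fills inputs.grad with d(score)/d(input)
    return inputs.grad.abs().squeeze(0)           # gradient magnitude per input feature

# Example on a placeholder image model: saliency over a single 3x32x32 input for class 1.
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
saliency = input_gradient_saliency(toy_model, torch.randn(1, 3, 32, 32), target_class=1)
print(saliency.shape)  # torch.Size([3, 32, 32])
```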