Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP 2021
DOI: 10.18653/v1/2021.blackboxnlp-1.19
Enhancing Interpretable Clauses Semantically using Pretrained Word Representation

Abstract: Tsetlin Machine (TM) is an interpretable pattern recognition algorithm based on propositional logic, which has demonstrated competitive performance in many Natural Language Processing (NLP) tasks, including sentiment analysis, text classification, and Word Sense Disambiguation. To obtain human-level interpretability, legacy TM employs Boolean input features such as bag-of-words (BOW). However, the BOW representation makes it difficult to use any pre-trained information, for instance, word2vec and GloVe word re…
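The idea the abstract alludes to — enriching a Boolean BOW input with pre-trained embeddings — can be sketched as activating a vocabulary feature whenever a document word is sufficiently similar to it in embedding space. The sketch below is illustrative only: the tiny `EMBEDDINGS` table, the `augment_bow` helper, and the similarity threshold are hypothetical stand-ins, not the paper's actual method or data; real vectors would come from a pre-trained word2vec or GloVe model.

```python
import numpy as np

# Toy stand-in for a pre-trained embedding table (word2vec/GloVe).
# These vectors are invented for illustration.
EMBEDDINGS = {
    "good":     np.array([0.90, 0.10, 0.00]),
    "great":    np.array([0.85, 0.15, 0.05]),
    "terrible": np.array([-0.80, 0.20, 0.10]),
    "movie":    np.array([0.10, 0.90, 0.30]),
    "film":     np.array([0.12, 0.88, 0.28]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def augment_bow(tokens, vocab, threshold=0.95):
    """Boolean BOW where a vocab feature fires if the word is present
    in the document OR highly similar (in embedding space) to a word
    that is present. Threshold is a hypothetical tuning knob."""
    features = {w: 0 for w in vocab}
    for t in tokens:
        if t in features:
            features[t] = 1
        if t not in EMBEDDINGS:
            continue
        for w in vocab:
            if w in EMBEDDINGS and cosine(EMBEDDINGS[t], EMBEDDINGS[w]) >= threshold:
                features[w] = 1
    return features

vocab = ["good", "great", "terrible", "movie", "film"]
print(augment_bow(["good", "movie"], vocab))
# With these toy vectors, "great" and "film" also fire, since they sit
# close to "good" and "movie" respectively; "terrible" does not.
```

The Boolean output keeps the input compatible with a TM's propositional clauses while injecting semantic generalization from the embeddings.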

Cited by 9 publications (3 citation statements) · References 22 publications
“…Barhom et al., 2019; Yadav et al., 2021a). For a fair comparison, we report the baseline performance by re-running Cattan et al. (2021a) using gold mentions, similar to the baseline used in Yadav et al. (2021b). We compare this baseline to two variants of our model, based on intra-span and inter-span attention (Sec 3.2).…”
Section: Implementation Details
confidence: 99%
“…The performance is slightly below the Bi-LSTM/GRU-based models because of its restriction on using pre-trained word embeddings. However, Yadav et al. [22] show that embedding similar words using a pre-trained word embedding significantly enhances performance and outperforms the baselines. In contrast, our proposed model only uses TM explainability to generate prerequisite word weightage to replace human attention input into neural network language models.…”
Section: Performance Comparison With State-of-the-arts
confidence: 99%
“…Unlike Deep Neural Networks (DNNs) and basic rule-based systems, TM learns rules in the same way that humans do, using logical reasoning, and it does so in a visible and interpretable manner [16,20]. TM has been shown to obtain a good trade-off between accuracy and interpretability on many NLP tasks [21,22]. However, it has some limitations, such as the Boolean bag-of-words input and the inability to use pre-trained information.…”
Section: Introduction
confidence: 99%