Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.258

Cold-Start and Interpretability: Turning Regular Expressions into Trainable Recurrent Neural Networks

Abstract: Neural networks can achieve impressive performance on many natural language processing applications, but they typically require large amounts of labeled training data and are not easily interpretable. Symbolic rules such as regular expressions, on the other hand, are interpretable, require no training, and often achieve decent accuracy; but rules cannot benefit from labeled data when it is available, and hence they underperform neural networks in rich-resource scenarios. In this paper, we propose a type of recurrent neural networks …
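The abstract's central idea, compiling a regular expression into a finite automaton whose transitions define an RNN-style recurrence, can be illustrated with a toy example. The sketch below is a minimal illustration, not the paper's exact construction: the regex `ab*`, its two-state automaton, and the hard 0/1 transition tensor are all assumptions chosen for brevity; in the paper these weights would be relaxed and trained on labeled data.

```python
import numpy as np

# Toy regex "ab*": two automaton states, 0 = start, 1 = accepting (hypothetical).
V = {"a": 0, "b": 1}             # toy vocabulary (hypothetical)
K = 2

# Transition tensor T with shape (|V|, K, K);
# T[v, i, j] = 1 iff the automaton moves from state i to j on symbol v.
T = np.zeros((len(V), K, K))
T[V["a"], 0, 1] = 1.0            # start --a--> accept
T[V["b"], 1, 1] = 1.0            # accept --b--> accept (the b* loop)

def score(tokens):
    h = np.zeros(K)
    h[0] = 1.0                   # one-hot start state
    for tok in tokens:
        h = h @ T[V[tok]]        # RNN-like recurrence over automaton states
    return h[1]                  # mass left in the accepting state

print(score(list("abb")))        # 1.0 -> "abb" matches ab*
print(score(list("ba")))         # 0.0 -> "ba" does not
```

Because matching reduces to repeated matrix products, the 0/1 entries of `T` can be replaced by continuous parameters and trained by gradient descent while starting from the rule-derived initialization.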

Cited by 24 publications (25 citation statements) | References 33 publications

Citation statements:
“…An i-FST is much more compact and faster, but it still (1) has too many parameters (especially $T \in \mathbb{R}^{V \times K \times K}$) compared to a traditional BiRNN, and (2) is unable to incorporate external word embeddings. To tackle these problems, we adopt the tensor-decomposition-based method proposed by Jiang et al. (2020) and modify the forward and backward score computation accordingly (Steps 3 and 4 in Algorithm 2).…”
Section: Parameter Tensor Decomposition: The Last Step Towards FSTRNN
confidence: 99%
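To make the quoted parameter count concrete: a rank-$R$ CP decomposition writes $T[v,i,j] \approx \sum_r E[v,r]\,A[i,r]\,B[j,r]$, shrinking $V \cdot K \cdot K$ parameters to $R\,(V + 2K)$ and letting each forward step avoid materializing $T$. The sketch below is a hedged illustration under assumed shapes; the factor names `E`, `A`, `B` and the rank `R` are illustrative, not taken from either paper.

```python
import numpy as np

V_size, K, R = 10_000, 20, 50      # assumed vocabulary, state, and rank sizes
rng = np.random.default_rng(0)
E = rng.normal(size=(V_size, R))   # word factor
A = rng.normal(size=(K, R))        # source-state factor
B = rng.normal(size=(K, R))        # target-state factor

def step(h, v):
    """One forward step h_t = h_{t-1} @ T[v] without materializing T.

    Equivalent to sum_i h[i] * T[v, i, j] under the CP factorization:
    project h through A, scale elementwise by the word factor E[v],
    then project back through B.
    """
    return ((h @ A) * E[v]) @ B.T

h = np.ones(K) / K                 # uniform initial state, for illustration
h = step(h, v=42)                  # advance on an arbitrary token id
print(h.shape)                     # (20,)
```

Each step costs $O(R\,(K + 1))$ multiply-accumulates per state projection instead of $O(K^2)$ against a dense per-word slice, which is where the compactness claim comes from.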
“…For static word embeddings such as GloVe, we again adopt the method of Jiang et al. (2020). Let $E_w \in \mathbb{R}^{V \times D}$ denote the word embedding matrix we want to incorporate.…”
Section: CP Decomposition (CPD)
confidence: 99%
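The incorporation step this quote refers to can be sketched by tying the word factor of the decomposition to the fixed embedding matrix $E_w$ through a small trained projection. The projection matrix `W` and all dimensions below are illustrative assumptions, not the papers' actual parameterization.

```python
import numpy as np

V_size, D, R = 10_000, 300, 50     # e.g. 300-d GloVe vectors (assumed sizes)
rng = np.random.default_rng(1)
E_w = rng.normal(size=(V_size, D)) # stand-in for a pretrained GloVe matrix
W = rng.normal(size=(D, R))        # learned projection (hypothetical)

# Word factor of the CP decomposition, now derived from the fixed
# embeddings: similar words get similar transition behavior for free.
E = E_w @ W
print(E.shape)                     # (10000, 50)
```

Only `W` (and the state factors) would be trained, so external lexical knowledge enters the model without adding $V \times R$ free parameters.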
“…Luo et al. (2018) concatenate information from regular expressions to word embeddings for spoken language understanding. Jiang et al. (2020) use an RNN to model regular expressions for text classification tasks. Most of these works provide effective ways to utilize word-level knowledge, but none of them formally considers the quality issues of distantly labeled rationales.…”
Section: Related Work
confidence: 99%
“…Recent studies have shown an increasing interest in incorporating human knowledge into neural network models (Xu et al., 2018; Vashishth et al., 2018; Luo et al., 2018; Li and Srikumar, 2019; Jiang et al., 2020). For many natural language processing (NLP) tasks, such domain knowledge often refers to salient words annotated by human experts, which are also called rationales.…”
Section: Introduction
confidence: 99%