2020
DOI: 10.1016/j.inffus.2020.03.003
Feature distillation network for aspect-based sentiment analysis

Cited by 41 publications (14 citation statements)
References 18 publications
“…To address the limitations of RNNs, such as the lack of position invariance and the lack of sensitivity to local key patterns, Liu and Shen [13] proposed a Gated Alternate Neural Network (GANN), which designs a special module named the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. Shuang et al. [14] proposed a feature distillation network (FDN) for reducing noise and distilling aspect-relevant sentiment features. To address the problem of insufficient aspect representation learning, Jiang et al. [15] proposed a mutually enhanced transformation network (METNet) for the ABSA task.…”
Section: B. Methods Based on Recurrent Neural Network
confidence: 99%
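As a rough illustration of the aspect-conditioned gating idea these models share (not the actual GANN, FDN, or METNet architectures), the sketch below shows how an aspect embedding can gate contextual word features so that aspect-irrelevant dimensions are suppressed. The class name, layer shapes, and dimensions are hypothetical placeholders.

```python
# Illustrative sketch only: a simple aspect-conditioned gating layer in PyTorch,
# loosely inspired by the gating/distillation idea described in the excerpt above.
# It is NOT the architecture proposed in [13], [14], or [15].
import torch
import torch.nn as nn


class AspectGate(nn.Module):
    """Gate contextual word features by their relevance to an aspect embedding."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Projects [word_feature ; aspect_embedding] to a per-dimension gate in (0, 1).
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, words: torch.Tensor, aspect: torch.Tensor) -> torch.Tensor:
        # words:  (batch, seq_len, hidden_dim) contextual word features
        # aspect: (batch, hidden_dim)          pooled aspect-term embedding
        aspect_expanded = aspect.unsqueeze(1).expand_as(words)
        g = torch.sigmoid(self.gate(torch.cat([words, aspect_expanded], dim=-1)))
        # Dimensions whose gate is near 0 are treated as aspect-irrelevant noise.
        return g * words


if __name__ == "__main__":
    layer = AspectGate(hidden_dim=64)
    words = torch.randn(2, 10, 64)     # e.g. BiLSTM outputs over a 10-token sentence
    aspect = torch.randn(2, 64)        # e.g. mean-pooled aspect-term vectors
    print(layer(words, aspect).shape)  # torch.Size([2, 10, 64])
```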
“…Their experiments demonstrated that both an increase in embedding dimensionality and an increase in the volume of health-related training data could improve classification accuracy. In contrast to this finding, a comparative study with the conventional BoW model showed that in many cases the BoW model is superior to word embeddings, particularly in sentiment analysis applications (Blair et al., 2020; Shuang et al., 2020).…”
Section: Related Work
confidence: 96%
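To make the comparison in this excerpt concrete, here is a minimal sketch contrasting a bag-of-words (BoW) baseline with an averaged word-embedding baseline for sentiment classification. The tiny toy corpus and the randomly initialised "embeddings" are placeholders, not the data or pretrained vectors used in the cited studies.

```python
# Minimal sketch: BoW counts vs. averaged word embeddings as classifier inputs.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great battery life", "terrible screen", "love the camera", "awful service"]
labels = [1, 0, 1, 0]

# BoW baseline: sparse token counts fed to a linear classifier.
bow = CountVectorizer()
X_bow = bow.fit_transform(texts)
clf_bow = LogisticRegression().fit(X_bow, labels)

# Embedding baseline: average one vector per token (random here; pretrained in practice).
rng = np.random.default_rng(0)
vocab = bow.vocabulary_
emb = rng.normal(size=(len(vocab), 50))
analyze = bow.build_analyzer()
X_emb = np.array([
    emb[[vocab[t] for t in analyze(doc) if t in vocab]].mean(axis=0)
    for doc in texts
])
clf_emb = LogisticRegression().fit(X_emb, labels)

print(clf_bow.score(X_bow, labels), clf_emb.score(X_emb, labels))
```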
“…According to unimodal affect recognition based on text [126,402], audio [178,194,199], visual [253,281,403], EEG [404], or ECG [336] signals, the most widely used modality is the visual signal, consisting mainly of facial expressions and body gestures. The number of visual-based emotion recognition systems is comparable to the sum of those based on all other modalities, since visual signals are easier to capture than other signals and the emotional information they carry is more helpful for recognizing the emotional state of human beings.…”
Section: Effects of Different Signals on Unimodal Affect Recognition
confidence: 99%