Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.726
Feature Projection for Improved Text Classification

Abstract: In classification, there are usually some good features that are indicative of class labels. For example, in sentiment classification, words like good and nice are indicative of the positive sentiment and words like bad and terrible are indicative of the negative sentiment. However, there are also many common features (e.g., words) that are not indicative of any specific class (e.g., voice and screen, which are common to both sentiment classes and are not discriminative for classification). Although deep learn…

Cited by 53 publications (29 citation statements)
References 29 publications
“…DESYR leverages representation learning by exercising a novel dependency-inspired variant of the Poincaré embedding [25]. Furthermore, we aim at removing the class-invariant features and obtaining superior class-representing features by incorporating the technique for feature projection highlighted by Qin et al. [30]. DESYR's backbone comprises two networks in parallel - the regulation-net (r-net) and the spotlight-net (s-net).…”
Section: Proposed Methodology: DESYR
Mentioning confidence: 99%
“…It functions as a network trained in parallel to the s-net. As highlighted in [12,30], we employ a Gradient Reversal Layer (GRL) to capture the class-invariant features. In a nutshell, a gradient-reversal layer can be thought of as a pseudo-functional mapping where the forward and backward propagation are respectively defined by two opposed equations as follows:…”
Section: Regulation Net (R-net)
Mentioning confidence: 99%
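The citation above describes the gradient reversal layer only in words (the quoted equations are truncated). Below is a minimal PyTorch-style sketch of a GRL as it is commonly defined: the forward pass is the identity, and the backward pass multiplies incoming gradients by -λ, so the feature extractor beneath it is pushed to discard whatever the attached head can exploit. The class and function names here are illustrative, not taken from the cited papers.

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients
    in the backward pass, as used for learning class-invariant features."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The second return value (None) is the "gradient" for lambd,
        # which is a plain float and needs none.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    """Convenience wrapper: y = x forward, dL/dx = -lambd * dL/dy backward."""
    return GradReverse.apply(x, lambd)
```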
“…• Feature Projection (FP): It is a novel approach to improve representation learning through feature projection. Existing features are projected into an orthogonal space (Qin et al., 2020).…”
Section: Comparison With Baselines
Mentioning confidence: 99%
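For readers unfamiliar with the projection step the citation refers to, the sketch below illustrates the basic operation in the spirit of Qin et al. (2020): remove from a feature vector f the component that lies along a common (class-invariant) feature vector p, keeping only the orthogonal, class-discriminative part. This is a simplified illustration, not the authors' full FP-Net; the function name and tensor shapes are assumptions.

```python
import torch


def orthogonal_projection(f, p, eps=1e-8):
    """Project f onto the orthogonal complement of p.

    f, p: tensors of shape (batch, dim).
    Returns f minus its projection onto p, i.e. the component of f
    orthogonal to the common-feature direction p."""
    scale = (f * p).sum(dim=-1, keepdim=True) / (p.pow(2).sum(dim=-1, keepdim=True) + eps)
    return f - scale * p
```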
“…Our work is related to sentiment classification (Liu, 2012), lifelong learning and continual learning. For sentiment classification, recent deep learning models have been shown to outperform traditional methods (Kim, 2014; Devlin et al., 2018; Shen et al., 2018; Qin et al., 2020). However, these models don't retain or transfer the knowledge to new tasks.…”
Section: Related Work
Mentioning confidence: 99%