Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/D19-1468
Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training

Abstract: User-generated reviews can be decomposed into fine-grained segments (e.g., sentences, clauses), each evaluating a different aspect of the principal entity (e.g., price, quality, appearance). Automatically detecting these aspects can be useful for both users and downstream opinion mining applications. Current supervised approaches for learning aspect classifiers require many fine-grained aspect labels, which are labor-intensive to obtain. And, unfortunately, unsupervised topic models often fail to capture the aspects of interest. […]

Cited by 39 publications (50 citation statements)
References 34 publications
“…As illustrated in Figure 3, translated seed words (e.g., "parfait") often co-occur with other words (e.g., "aime," meaning "love") that have zero weight in Ẑ but are also helpful for the task at hand. To exploit such words in the absence of labeled target documents, we extend the monolingual weakly supervised co-training method of Karamanolakis et al. (2019) to our cross-lingual setting, and use our classifier based on translated seed words as a teacher to train a student, as we describe next. First, CLTS uses our classifier from Equation 5 as a teacher to predict labels q_j for unlabeled documents x_j^T ∈ D^T that contain seed words:…”
Section: Teacher-Student Co-Training in L^T (mentioning)
confidence: 99%
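
To make the teacher-student step quoted above concrete, here is a minimal sketch of how a seed-word teacher can pseudo-label unlabeled documents and how a student can then learn from all words in them, including co-occurring non-seed words like "aime". The seed words, toy documents, and model choice (a scikit-learn logistic regression) are illustrative assumptions, not the cited papers' actual implementation.

```python
# Minimal teacher-student co-training sketch, assuming a seed-word teacher
# and a bag-of-words student; all names and data here are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed words per aspect label.
SEED_WORDS = {0: {"parfait", "excellent"},   # e.g., "quality"
              1: {"prix", "cher"}}           # e.g., "price"

def teacher_predict(doc):
    """Predict a pseudo-label from seed-word hits; None if no seed occurs."""
    tokens = set(doc.lower().split())
    counts = {label: len(tokens & seeds) for label, seeds in SEED_WORDS.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None

unlabeled_docs = ["service parfait , je l'aime",
                  "prix trop cher pour moi",
                  "rien de notable ici"]

# The teacher labels only documents containing at least one seed word ...
pseudo = [(d, q) for d in unlabeled_docs if (q := teacher_predict(d)) is not None]
docs, q_labels = zip(*pseudo)

# ... and the student learns from *all* words of those documents, so it can
# also exploit co-occurring non-seed words such as "aime".
vec = CountVectorizer()
student = LogisticRegression().fit(vec.fit_transform(docs), q_labels)
print(student.predict(vec.transform(["je l'aime beaucoup"])))
```

The last line illustrates the point of the quote: the student can classify a document that contains no seed word at all, because it has picked up the co-occurring word "aime" from the teacher's pseudo-labeled data.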
“…Note that our teacher with weights transferred across languages is different from that of Karamanolakis et al. (2019), which simply "counts" seed words.…”
Section: Teacher-Student Co-Training in L^T (mentioning)
confidence: 99%
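
The contrast drawn in this statement can be sketched as two small scoring functions: one that counts raw seed-word occurrences (the monolingual teacher of Karamanolakis et al., 2019) and one that instead sums per-word weights transferred across languages. The weight matrix Z_hat, vocabulary, and seed sets below are hypothetical placeholders, not values from either paper.

```python
# Sketch contrasting a count-based teacher with a transferred-weight teacher.
import numpy as np

VOCAB = {"parfait": 0, "cher": 1, "aime": 2}
SEEDS = {0: {"parfait"}, 1: {"cher"}}        # label -> seed words

# Hypothetical transferred weights Z_hat: rows = labels, columns = vocabulary.
Z_hat = np.array([[1.7, 0.0, 0.0],           # weights for label 0
                  [0.0, 2.1, 0.0]])          # weights for label 1

def counting_teacher(tokens):
    """Predict by counting seed-word occurrences (count-based teacher)."""
    counts = [sum(t in seeds for t in tokens) for seeds in SEEDS.values()]
    return int(np.argmax(counts))

def weighted_teacher(tokens):
    """Predict by summing transferred per-word weights instead of raw counts."""
    scores = sum(Z_hat[:, VOCAB[t]] for t in tokens if t in VOCAB)
    return int(np.argmax(scores))

doc = "service parfait je l aime".split()
print(counting_teacher(doc), weighted_teacher(doc))
```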
“…The four most commonly used algorithms for semi-supervised learning are Self-Training [17], Co-Training [18], Generative Models [19], and Graph-Based Semi-Supervised Learning [20]. Self-Training uses the classifier itself to repeatedly generate high-confidence pseudo-labeled samples, which are added to the training set to improve the final classification performance.…”
Section: Semi-Supervised Learning (mentioning)
confidence: 99%
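
As a companion to this citation statement, here is a minimal self-training loop in the spirit it describes: the classifier repeatedly pseudo-labels its most confident unlabeled samples and retrains on them. The dataset, confidence threshold, and round count are illustrative assumptions, not taken from reference [17].

```python
# Minimal self-training sketch: pseudo-label high-confidence samples, retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
X_lab, y_lab, X_unl = X[:20], y[:20], X[20:]   # small labeled set, large unlabeled set

clf = LogisticRegression().fit(X_lab, y_lab)
for _ in range(5):                              # a few self-training rounds
    proba = clf.predict_proba(X_unl)
    keep = proba.max(axis=1) > 0.95             # keep high-confidence samples only
    if not keep.any():
        break
    X_lab = np.vstack([X_lab, X_unl[keep]])     # add pseudo-labeled samples
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    X_unl = X_unl[~keep]
    clf = LogisticRegression().fit(X_lab, y_lab)
```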