Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM 2017)
DOI: 10.1145/3132847.3133011
Regularized and Retrofitted models for Learning Sentence Representation with Context

Abstract: Vector representation of sentences is important for many text processing tasks that involve classifying, clustering, or ranking sentences. For solving these tasks, bag-of-words-based representations have been used for a long time. In recent years, distributed representations of sentences learned by neural models from unlabeled data have been shown to outperform traditional bag-of-words representations. However, most existing methods belonging to the neural models consider only the content of a sentence, and disregard…

Cited by 4 publications (2 citation statements)
References 15 publications
“…Finally, for context prediction, instead of performing a random walk, we select nodes based on their similarity in the graph. A similar similarity-based graph has shown impressive results in learning sentence representations (Saha et al., 2017).…”
Section: Related Work
confidence: 99%
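The statement above contrasts similarity-based context selection with random-walk sampling. A minimal sketch of that idea, assuming a weighted adjacency-list representation of the similarity graph and a top-k cutoff (both details are illustrative assumptions, not taken from the cited paper):

```python
# Illustrative sketch: pick a node's "context" as its most similar neighbours
# in a weighted similarity graph, instead of sampling them via a random walk.
# The graph format and the value of k are assumptions for illustration.
from typing import Dict, List, Tuple

def context_by_similarity(graph: Dict[int, List[Tuple[int, float]]],
                          node: int, k: int = 5) -> List[int]:
    """graph[node] is a list of (neighbour, similarity) pairs.
    Return the k neighbours with the highest similarity to `node`."""
    neighbours = sorted(graph.get(node, []), key=lambda e: e[1], reverse=True)
    return [n for n, _ in neighbours[:k]]

# Toy usage
g = {0: [(1, 0.9), (2, 0.4), (3, 0.7)], 1: [(0, 0.9)]}
print(context_by_similarity(g, 0, k=2))  # -> [1, 3]
```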
“…The edge is defined by the distance measure d(i, j) between tweets t_i and t_j, where the value of d represents how similar the two tweets are. A similar similarity-based graph has shown impressive results in learning sentence representations (Saha et al. 2017). To find the nearest instances efficiently, we used a k-d tree data structure (Witten et al. 2016).…”
Section: Graph Construction
confidence: 99%
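The graph-construction step described in this statement can be sketched as follows. This is not the authors' code; the choice of Euclidean distance, the number of neighbours k, and the source of the tweet/sentence vectors are assumptions made only for illustration. It uses SciPy's k-d tree for the nearest-neighbour lookup mentioned in the quote.

```python
# Illustrative sketch: build a similarity graph over sentence/tweet vectors,
# linking each item to its k nearest neighbours found via a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def build_similarity_graph(vectors: np.ndarray, k: int = 5):
    """Return an edge list [(i, j, d(i, j)), ...] connecting each vector to its
    k nearest neighbours; d(i, j) is the Euclidean distance between them."""
    tree = cKDTree(vectors)
    # Query k+1 neighbours because the nearest neighbour of a point is itself.
    dists, idxs = tree.query(vectors, k=k + 1)
    edges = []
    for i, (drow, irow) in enumerate(zip(dists, idxs)):
        for d, j in zip(drow[1:], irow[1:]):  # skip the self-match at position 0
            edges.append((i, int(j), float(d)))
    return edges

# Toy usage: 100 random 50-dimensional "sentence vectors".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))
    print(build_similarity_graph(X, k=3)[:5])
```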