2015
DOI: 10.48550/arxiv.1503.00075
Preprint

Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks

Cited by 247 publications (374 citation statements)
References: 0 publications
“…CNN-based models focus on patterns of words across space, whereas RNN-based ones can better capture time-level features of words. Tai et al. [13] introduced a generalization of the standard Long Short-Term Memory (LSTM) architecture to tree-structured network topologies, which outperforms a sequential LSTM at representing sentence meaning.…”
Section: Text Representation Learning
Mentioning confidence: 99%
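This statement summarizes the paper's core contribution: composing a node's memory cell from the states of its children rather than from a single predecessor. Below is a minimal NumPy sketch of the Child-Sum Tree-LSTM node update described in Tai et al. (2015); the class name, dimensions, initialization, and toy usage are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a Child-Sum Tree-LSTM node update (after Tai et al., 2015).
# Dimensions, initialization, and class layout are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ChildSumTreeLSTMCell:
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        d, h = input_dim, hidden_dim
        # One (W, U, b) triple per gate: input (i), forget (f), output (o), candidate (u).
        self.params = {
            g: (rng.normal(0.0, 0.1, (h, d)),  # W: transforms the node's own input x_j
                rng.normal(0.0, 0.1, (h, h)),  # U: transforms child hidden states
                np.zeros(h))                   # b: bias
            for g in ("i", "f", "o", "u")
        }
        self.hidden_dim = h

    def node_forward(self, x_j, child_states):
        """x_j: input vector of node j; child_states: list of (h_k, c_k) tuples."""
        # Child-sum: sum the children's hidden states (empty sum for leaves).
        h_tilde = sum((h_k for h_k, _ in child_states), np.zeros(self.hidden_dim))

        def gate(name, h_in):
            W, U, b = self.params[name]
            return W @ x_j + U @ h_in + b

        i = sigmoid(gate("i", h_tilde))   # input gate
        o = sigmoid(gate("o", h_tilde))   # output gate
        u = np.tanh(gate("u", h_tilde))   # candidate cell update
        c = i * u
        # One forget gate per child, conditioned on that child's hidden state,
        # so the node can selectively keep or drop each subtree's memory.
        for h_k, c_k in child_states:
            c += sigmoid(gate("f", h_k)) * c_k
        h = o * np.tanh(c)
        return h, c

# Toy usage: two leaf children feeding a parent node.
cell = ChildSumTreeLSTMCell(input_dim=4, hidden_dim=3)
x = np.ones(4)
leaf1 = cell.node_forward(x, [])
leaf2 = cell.node_forward(x, [])
parent_h, parent_c = cell.node_forward(x, [leaf1, leaf2])
```

Because the child states are summed, the update is insensitive to child order and handles a variable number of children, which is what makes it suitable for dependency parses and other trees of arbitrary branching factor.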
“…Since an AST can fully represent the lexical and grammatical structure of the source code without recording all details of the code (such as punctuation marks), it is widely used in many software engineering techniques/tools. Some AST-based deep learning code embedding techniques include RvNN, TBCNN, Tree-LSTM, ASTNN and code2vec [15], [16], [18]-[20]. In this paper, we mainly use code2vec to represent the semantics of method code.…”
Section: B. AST-Based Code Embedding
Mentioning confidence: 99%
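As a small illustration of the point above (code2vec itself is not reproduced here), Python's standard-library ast module shows how an AST preserves the lexical and grammatical structure of a method while discarding punctuation; the toy method is made up for this example.

```python
# Hypothetical illustration: parse a tiny method into an AST and inspect its
# structure with Python's standard-library ast module.
import ast

source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

# The dump shows FunctionDef -> arguments -> Return -> BinOp(Add) nodes;
# punctuation such as the colon and parentheses is not represented. Trees
# like this are the input consumed by AST-based embeddings such as Tree-LSTM
# or code2vec.
print(ast.dump(tree.body[0], indent=2))
```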
“…A next generation of methods uses word embedding techniques like word2vec (Mikolov et al., 2013a,b) and GloVe (Pennington et al., 2014), which furthermore consider the neighborhood relations of words and can benefit from external corpora for training the representations. Another line of work focuses on the sequential structure of tokens within text and uses deep learning architectures such as CNNs, as in (Kim, 2014), or RNNs, as in, e.g., (Tai et al., 2015), to capture semantic information.…”
Section: Related Work 2.1 Text Classification
Mentioning confidence: 99%
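To make the contrast concrete, the sketch below builds a fixed-length text representation by simply averaging pretrained word vectors (word2vec/GloVe style); the toy vocabulary and 4-dimensional vectors are made up for illustration. Sequence models such as CNNs, RNNs, or the Tree-LSTM discussed above instead consume the token vectors in order (or over a parse tree) rather than collapsing them into a bag.

```python
# Hypothetical sketch: bag-of-embeddings sentence representation from
# pretrained word vectors. Vocabulary and vectors are made up for illustration.
import numpy as np

embeddings = {
    "the":   np.array([0.1, 0.0, 0.2, -0.1]),
    "movie": np.array([0.4, -0.2, 0.1, 0.3]),
    "was":   np.array([0.0, 0.1, 0.0, 0.0]),
    "great": np.array([0.5, 0.6, -0.1, 0.2]),
}

def sentence_vector(tokens, emb, dim=4):
    """Average the vectors of known tokens; return zeros if none are known."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

print(sentence_vector("the movie was great".split(), embeddings))
# Word order is lost here; sequential and tree-structured models retain it.
```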