2015
DOI: 10.1007/978-3-319-25159-2_49
Building Program Vector Representations for Deep Learning

Abstract: Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs, since deep architectures cannot be trained effectively with pure back-propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are th…

Cited by 101 publications (55 citation statements)
References 35 publications
“…CLCMiner is based on revision histories; it is limited to detect cross-language clones that have been changed in the past in the same project. For clones that are never changed, we can explore more language attributes that can identify clone relations (e.g., using deep learning to build vector representation of programs [21]) across languages. We also believe this limitation can be compensated by a single-language detector that can detect cross-project and same-language clones based on certain clone transitivity across projects and languages.…”
Section: Discussion and Future Work
confidence: 99%
“…For each subtree, with parent node p and n child nodes {c i } 1≤i ≤n , define l i = (# leaves of c i )/(# leaves of p). Similar to [8], we define a loss function to measure how well the learnt vectors are describing the subtrees. Let T be the number of distinct AST types whose embeddings we are trying to learn.…”
Section: Methods
confidence: 99%
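The coefficient definition quoted above (l_i = (# leaves of c_i)/(# leaves of p)) can be sketched in a few lines. This is a minimal illustration, not the cited paper's implementation; the `Node` class is a hypothetical stand-in for an AST node:

```python
# Sketch of the leaf-proportion coefficients described in the quoted passage.
# `Node` is a hypothetical AST node type introduced only for illustration.

class Node:
    def __init__(self, type_name, children=None):
        self.type = type_name
        self.children = children or []

def num_leaves(node):
    """Count the leaves in the subtree rooted at `node`."""
    if not node.children:
        return 1
    return sum(num_leaves(c) for c in node.children)

def child_coefficients(parent):
    """l_i = (# leaves of c_i) / (# leaves of parent), one per child."""
    total = num_leaves(parent)
    return [num_leaves(c) / total for c in parent.children]

# Example: a parent whose two children hold 1 and 2 leaves, so l = [1/3, 2/3].
tree = Node("If", [Node("Cond"), Node("Body", [Node("Stmt"), Node("Stmt")])])
coeffs = child_coefficients(tree)
```

The coefficients sum to 1 by construction, which is what lets them weight each child's vector by the share of the subtree it covers when composing the parent's representation.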
“…To achieve this goal, we mine code fragments where violations are localized and identify common patterns, not only in fixed violations but also in unfixed violations. Before describing our approach of mining common code patterns, we formalize the definition of a code pattern, and provide justifications for the techniques selected in the approach (namely CNNs [18], [31], [32] and X-means clustering algorithm [19]).…”
Section: Mining Common Code Patterns
confidence: 99%