2020
DOI: 10.48550/arxiv.2005.06980
Preprint

A Multi-Perspective Architecture for Semantic Code Search

Cited by 6 publications (13 citation statements)
References 0 publications

“…Many application domains and previous studies [60, 61, 62, 63] have used various evaluation metrics depending on the recommender system. To determine how effectively a recommender system performs, we used the following evaluation measures.…”
Section: Experimental Evaluations (mentioning)
confidence: 99%
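The excerpt does not name the measures it refers to. As a hedged illustration only, mean reciprocal rank (MRR) and success@k are metrics commonly reported for semantic code search and recommendation, and a minimal sketch of computing them is given below; the function names and example ranks are hypothetical.

```python
# Hypothetical illustration: the cited statement does not list its evaluation
# measures, so two metrics commonly used for code search / recommendation are
# sketched here.

def mean_reciprocal_rank(ranks):
    """ranks: 1-based rank of the correct item per query, or None if not retrieved."""
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

def success_at_k(ranks, k):
    """Fraction of queries whose correct item appears within the top-k results."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

# Example: correct answers ranked 1st, 3rd, and not retrieved at all.
ranks = [1, 3, None]
print(mean_reciprocal_rank(ranks))  # (1 + 1/3 + 0) / 3 ≈ 0.444
print(success_at_k(ranks, 3))       # 2 / 3 ≈ 0.667
```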
“…In contrast, Yao et al. [23] employ a reinforcement learning framework to produce code annotations for code retrieval, and then distinguish between relevant and irrelevant code snippets using the generated annotations. On the other hand, Haldar et al. [18] use an LSTM encoder to represent code snippets, built from both unprocessed tokens and AST-converted text sequences. For semantic code retrieval, their approach employs a bilateral multi-perspective matching model.…”
Section: Related Work (mentioning)
confidence: 99%
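The architecture summarized in this excerpt (sequence encoders over both raw code tokens and a linearized AST, matched against a natural-language query) can be illustrated with a short PyTorch sketch. This is a generic, assumption-laden rendering of the idea, not Haldar et al.'s released implementation; the class names, dimensions, and cosine-similarity scoring are placeholders.

```python
# Minimal sketch (assumptions, not the authors' code): encode the query, the raw
# code tokens, and a linearized AST with LSTMs, fuse the two code views, and
# score query/code pairs by cosine similarity.
import torch
import torch.nn as nn


class SeqEncoder(nn.Module):
    """Embed a token sequence and keep the final LSTM hidden state."""

    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):                # (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]                            # (batch, hidden_dim)


class CodeQueryMatcher(nn.Module):
    """Score query/code pairs by cosine similarity of their encodings."""

    def __init__(self, vocab_size, hidden_dim=128):
        super().__init__()
        self.query_enc = SeqEncoder(vocab_size, hidden_dim=hidden_dim)
        self.token_enc = SeqEncoder(vocab_size, hidden_dim=hidden_dim)  # raw tokens
        self.ast_enc = SeqEncoder(vocab_size, hidden_dim=hidden_dim)    # AST sequence
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, query_ids, code_ids, ast_ids):
        q = self.query_enc(query_ids)
        c = self.fuse(torch.cat([self.token_enc(code_ids),
                                 self.ast_enc(ast_ids)], dim=-1))
        return nn.functional.cosine_similarity(q, c)    # (batch,)


# Toy usage with random token ids drawn from a 1000-symbol vocabulary.
model = CodeQueryMatcher(vocab_size=1000)
scores = model(torch.randint(1, 1000, (2, 8)),
               torch.randint(1, 1000, (2, 20)),
               torch.randint(1, 1000, (2, 30)))
print(scores.shape)   # torch.Size([2])
```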
“…CAT, proposed by [18], represents program code using unprocessed tokens together with the AST's sequence of strings, processed by sequence encoders. Additionally, the authors developed a hybrid model known as MPCAT by adding multi-perspective matching procedures [63] to CAT.…”
Section: CAT (mentioning)
confidence: 99%
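The "multi-perspective matching procedures" referred to here (in the bilateral multi-perspective matching style of reference [63]) amount to comparing two hidden vectors under several learned perspective weightings rather than with a single similarity score. The sketch below is a generic re-implementation of that matching step, not the MPCAT code; the class name and dimensions are invented.

```python
# Generic multi-perspective cosine matching (an illustration of the technique,
# not the MPCAT implementation): each of L perspectives owns a learned weight
# vector that rescales both inputs before a cosine similarity is taken, giving
# an L-dimensional matching vector per pair.
import torch
import torch.nn as nn


class MultiPerspectiveCosine(nn.Module):
    def __init__(self, hidden_dim, num_perspectives=8):
        super().__init__()
        # One learned weight vector per perspective: (L, hidden_dim).
        self.weights = nn.Parameter(torch.randn(num_perspectives, hidden_dim))

    def forward(self, v1, v2):
        # v1, v2: (batch, hidden_dim) -> matching vector of shape (batch, L).
        w = self.weights.unsqueeze(0)        # (1, L, hidden_dim)
        a = w * v1.unsqueeze(1)              # (batch, L, hidden_dim)
        b = w * v2.unsqueeze(1)
        return nn.functional.cosine_similarity(a, b, dim=-1)


match = MultiPerspectiveCosine(hidden_dim=128, num_perspectives=8)
m = match(torch.randn(4, 128), torch.randn(4, 128))
print(m.shape)   # torch.Size([4, 8]) -- one cosine score per perspective
```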
“…A second observation is that the strong trend has been to squeeze ever more information out of the source code being summarized. [Table residue from Fig. 1 of the citing paper: a comparison of prior approaches from Loyola et al. (2017) [16] through Liu et al. (2021) [31], including Haldar et al. (2020) [28].]…”
Section: A. Source Code Summarization (mentioning)
confidence: 99%