2023 IEEE/ACM 20th International Conference on Mining Software Repositories (MSR)
DOI: 10.1109/msr59073.2023.00065
Pre-trained Model Based Feature Envy Detection

Cited by 2 publications (2 citation statements). References 26 publications.
“…Pan et al [25] encoded Java source code files using a pre-trained CodeBERT model to aid software defect prediction. Kovačević et al [46] conducted experiments with the Code2Vec, Code2Seq, and CuBERT models to represent Java methods or classes as code embeddings, facilitating machine-learning-based detection of two code smells, i.e., long method and god class, while Ma et al [26] leveraged the CodeT5, CodeGPT, and CodeBERT models to detect the feature envy code smell. To compare software systems, Karakatič et al [47] utilized a pre-trained Code2Vec model to embed Java methods.…”
Section: Pre-trained Models in Code-related Tasks
confidence: 99%
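The citation statement above describes encoding Java source code with pre-trained models such as CodeBERT and then feeding the resulting embeddings to a classifier. A minimal sketch of that embedding step is shown below, assuming the Hugging Face transformers library; the checkpoint name, the Java snippet, and the choice of the [CLS] hidden state as the method-level embedding are illustrative assumptions, not the cited authors' exact pipeline.

```python
# Hedged sketch: embed a Java method with pre-trained CodeBERT and use the
# resulting vector as a feature for a downstream smell/defect classifier.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

# Illustrative Java method (any method body under the 512-token limit works).
java_method = """
public String getName() {
    return this.name;
}
"""

# Tokenize and truncate to the model's maximum sequence length.
inputs = tokenizer(java_method, return_tensors="pt",
                   truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

# Take the [CLS] token's final hidden state as a fixed-size code embedding.
embedding = outputs.last_hidden_state[:, 0, :]   # shape: (1, 768)
print(embedding.shape)
```

The 768-dimensional vector can then be passed to any conventional machine-learning classifier, which is the general pattern the surveyed studies follow.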
“…To achieve this, we evaluated the proposed method using two BERT-based code models, i.e., CodeBERT and GraphCodeBERT. These two code models were selected based on existing studies indicating their success in capturing code syntax and semantics, making them suitable for tasks requiring source code understanding [25][26][27][28][29][30][31][32]. To further train the pre-trained models of CodeBERT and GraphCodeBERT in a self-supervised manner, i.e., to perform task-adaptive pre-training, we constructed a dataset of more than 45,000 files affected by commit operations.…”
Section: Introduction
confidence: 99%
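The second statement refers to task-adaptive pre-training, i.e., continuing the self-supervised (masked language modeling) training of CodeBERT or GraphCodeBERT on the collected corpus of commit-affected files before fine-tuning. The sketch below shows one common way to do this with the Hugging Face transformers and datasets libraries; the corpus file name, masking rate, and training hyperparameters are assumptions for illustration and not the paper's reported configuration.

```python
# Hedged sketch: task-adaptive pre-training of CodeBERT via continued
# masked-language-model training on a corpus of commit-affected files.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/codebert-base")

# Hypothetical corpus: one text record per commit-affected source file.
dataset = load_dataset("text", data_files={"train": "commit_affected_files.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens; the model is trained to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="codebert-tapt",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
```

After this adaptation step, the further-trained encoder would be fine-tuned on the labeled code smell detection task, as described in the quoted introduction.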