2019 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme.2019.00021

Deep Learning Anti-Patterns from Code Metrics History

Abstract: Anti-patterns are poor solutions to recurring design problems. A number of empirical studies have highlighted the negative impact of anti-patterns on software maintenance, which has motivated the development of various detection techniques. Most of these approaches rely on structural metrics of software systems to identify affected components, while others exploit historical information by analyzing co-changes occurring between code components. By relying solely on one aspect of software systems (i.e., structural or h…

Cited by 25 publications (15 citation statements) | References 31 publications
“…We did not include historical indicators in our code characteristics in this iteration, as we derived our model by examining two smells, Large Class and Long Method. Though historical indicators can be used to detect Large Classes (Barbez et al., 2019), we did not find any code smell specification, heuristic, or annotator reasoning in the analyzed literature that considers historical factors when deciding whether a class suffers from this smell. Furthermore, Moha et al. (2010) showed that their domain analysis, which did not include historical indicators, is complete enough to describe a whole range of smells.…”
Section: Generalizability of the Proposed Annotation Model (mentioning)
confidence: 87%
“…We also looked for tools that extract methods through static analysis (i.e., they do not require compiled code) to avoid compilation issues. Based on these requirements, we selected the CK Tool (Aniche, 2015) and RepositoryMiner (Barbez et al., 2019) for metric extraction.…”
Section: Heuristic-based Detection (mentioning)
confidence: 99%
“…Such embeddings are helpful for the classification of security-relevant commits (Sabetta & Bezzi, 2018; Lozoya et al., 2021), log message generation, bug-fixing patch identification, and just-in-time defect prediction (Hoang et al., 2020). We did not consider this group of embeddings for code smell detection, although code change history can be helpful for God Class detection (Barbez et al., 2019; Palomba et al., 2014). The MLCQ dataset contains samples from 792 projects, and extracting the needed embeddings would be extremely time-intensive.…”
Section: Applying Source Code Embeddings in Software Engineering Tasks (mentioning)
confidence: 99%
“…The features extracted from the code are structural metrics and historical metrics (the values of the structural metrics over the n preceding commits), which is similar to the approach described in this paper. The authors of [7] presented CAME (Convolutional Analysis of code Metrics Evolution), a deep-learning approach that relies on structural and historical metric values to detect code smells. As in this paper, the F-measure was used to measure performance.…”
Section: Figure 1. The Input to the Transformer Is a Sequence of Several Tokens Where… (unclassified)
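The citation above describes CAME as a convolutional model over the evolution of code metrics. The following is a minimal sketch of that general idea, not the authors' implementation: the model name MetricsHistoryCNN, the layer sizes, the number of metrics, and the history length are all illustrative assumptions.

```python
# Sketch only: a 1D CNN over the values of structural metrics across the last
# n commits of a class, ending in a single smell probability (e.g., God Class).
import torch
import torch.nn as nn

class MetricsHistoryCNN(nn.Module):
    def __init__(self, num_metrics: int = 10, history_len: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_metrics, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the commit-history dimension
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # one logit: does the class exhibit the smell?
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_metrics, history_len) -- metric values over past commits
        return torch.sigmoid(self.classifier(self.conv(x)))

# Example: one class, 10 structural metrics tracked over 64 commits.
model = MetricsHistoryCNN()
history = torch.randn(1, 10, 64)
print(model(history))  # smell probability in [0, 1]
```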
“…The paper states that the average sequence length is between 3 and 5. All of the cited works used a within-project model evaluation approach, except for [7], which used cross-project evaluation. In addition, all of the works used the F-measure to evaluate their models.…”
Section: Figure 1. The Input to the Transformer Is a Sequence of Several Tokens Where… (unclassified)
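Since every study cited above reports the F-measure, a small self-contained illustration of how that score is derived from a detector's confusion counts may help; the counts below are made up.

```python
# F-measure (F1) from true positives, false positives, and false negatives.
def f_measure(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f_measure(tp=30, fp=10, fn=15))  # precision 0.75, recall ~0.67 -> F1 ~0.71
```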