In this work, we present a case series comprising mandibular reconstruction with a free fibula flap for ameloblastic carcinoma, a pathological mandibular fracture, and recontouring of mandibular angle hyperplasia, all treated successfully using a fast and economical in-house virtual planning and 3D-printing protocol. Pre-operatively, the reconstructed mandibular model and surgical templates were designed with the help of two types of free software. Next, all designed 3D hardware tools were printed on an affordable fused deposition modeling desktop 3D printer. A 3D-printed reconstructed mandibular model was used for titanium plate bending. Our findings show that an average of 5 h 29 min per case is required from the virtual planning stage until the 3D printing of all hardware tools is completed. The average cost of the 3D-printed hardware tools and titanium plate per case is only $203.42.
Source code clones are common in software development as part of reuse practice. However, they are also often a source of errors that compromise software maintainability. Existing work on code clone detection focuses mainly on clones within a single programming language. Nowadays, however, software is increasingly developed on multi-language platforms on which code is reused across different programming languages. Detecting code clones on such platforms is challenging and has not been studied much. In this paper, we present CLCD-I, a deep neural network-based approach for detecting cross-language code clones using InferCode, an embedding technique for source code. The design of our model is twofold: (a) it takes as input InferCode embeddings of source code in two different programming languages and (b) it forwards them to a Siamese architecture for comparative processing. We compare the performance of CLCD-I with LSTM autoencoders and with existing approaches to cross-language code clone detection. The evaluation shows that CLCD-I outperforms LSTM autoencoders by 30% on average and the existing approaches by 15% on average.
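Neither the abstract nor this page includes CLCD-I's implementation, so the following minimal PyTorch sketch is only an illustration, under stated assumptions, of the two-part design described above: a pair of pre-computed, fixed-size InferCode-style embeddings fed through a shared-weight Siamese branch whose outputs are compared for a clone/non-clone decision. The embedding dimension (100), the layer sizes, and the use of the absolute difference as the comparison step are assumptions for illustration, not details from the paper.

import torch
import torch.nn as nn

class SiameseComparator(nn.Module):
    # One shared-weight branch processes both embeddings; a classifier scores their difference.
    def __init__(self, embed_dim: int = 100, hidden_dim: int = 64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.classifier = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        h_a = self.branch(emb_a)  # embedding of, say, a Java snippet
        h_b = self.branch(emb_b)  # embedding of, say, a Python snippet
        # Element-wise absolute difference, then a sigmoid clone probability.
        return self.classifier(torch.abs(h_a - h_b)).squeeze(-1)

# Toy usage with random stand-ins for InferCode embeddings (batch of 8 pairs).
model = SiameseComparator()
java_emb, python_emb = torch.randn(8, 100), torch.randn(8, 100)
clone_prob = model(java_emb, python_emb)  # values in (0, 1); higher means more likely a clone

Because both inputs pass through the same branch, the comparator learns a representation in which functionally equivalent snippets from different languages land close together, which is the point of the Siamese design mentioned in the abstract.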
Software clones matter both for detecting security gaps and for software maintenance, whether within one programming language or across several. Existing work on source code clone detection performs well, but only within a single programming language. If a piece of code with the same functionality is written in different programming languages, detecting it is harder because different programming languages have different lexical structures. Moreover, most existing work relies on manual feature engineering. In this paper, we propose a deep neural network model based on source code AST embeddings that detects cross-language clones in an end-to-end fashion, without a manual process to pinpoint similar features across different programming languages. To overcome data shortage and reduce overfitting, a Siamese architecture is employed. The design methodology of our model is twofold: (a) it accepts AST embeddings of two different programming languages as input, and (b) it uses a deep neural network to learn abstract features from these embeddings to improve the accuracy of cross-language clone detection. An early evaluation of the model shows an average precision, recall, and F-measure of 0.99, 0.59, and 0.80, respectively, which indicates that our model outperforms all available models in cross-language clone detection.

Keywords: deep neural networks • cross-language code clone detection • abstract syntax trees

Introduction

Code clone (CL) detection is the process of detecting similar code fragments. Some code fragments are copied from online sources without awareness of the potential negative effects (e.g., security threats, increased complexity). Detecting such clones is easier when the code snippets are in the same programming language [1,2,3,4,5,6]. Duplicate functionality is common in large software systems, where the software is often written in multiple programming languages. If one functionality is to be updated or removed, the change has to be reflected in all of its clones. The latest approach to cross-language clone detection, by Perez and Chiba [4], relies on skip-gram models over ASTs. However, it ignores the morphology of tokens, which impairs detection accuracy. Furthermore, it is not clear how their model can capture trees that differ greatly in syntax. For example, although the ASTs of the code snippets in Fig. 2 look completely different because the snippets are written in different programming languages, they have the same functionality, so it becomes difficult for an AST-based model to recognize such a clone, as reported in their evaluation. This shows that careful selection of the source code embedding technique is critical for accuracy; we later discuss how different embeddings affect the F1 score. Detecting code clones has long been an active area of research [7], with various approaches proposed, including token-, AST-, metrics-, binary-, and graph-based approaches [8,3,9,10,11,12]. Most existing work focuses on clone detection within the same programming language. However, common APIs (e.g., Apache Spark) for b...
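The abstracts above do not include preprocessing code, so purely as an illustration of the AST-based pipeline being discussed, the sketch below uses Python's standard ast module to linearize a snippet into its AST node-type sequence, the kind of sequence a skip-gram or tree-based embedding model would consume. The function name ast_node_types and the breadth-first traversal via ast.walk are assumptions for illustration, not the authors' actual preprocessing.

import ast

def ast_node_types(source: str) -> list[str]:
    # Parse the snippet and return its AST node-type names in ast.walk (breadth-first) order.
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

snippet = """
def total(xs):
    s = 0
    for x in xs:
        s += x
    return s
"""

print(ast_node_types(snippet))
# e.g. ['Module', 'FunctionDef', 'arguments', 'Assign', 'For', 'Return', ...]

A Java parser (for example, the javalang library) would emit a structurally different node sequence for the same summation logic, which is exactly the cross-language mismatch that makes an AST-based model struggle on clones like those in Fig. 2.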