Code review plays an important role in software quality control. A typical review process involves a careful check of a piece of code to find defects and other quality issues or violations. One type of issue that may impact software quality is code smells, i.e., bad programming practices that can lead to defects or maintenance problems. Yet, little is known about the extent to which code smells are identified during code reviews. To investigate which code smells are identified in code reviews, what actions reviewers suggest, and what actions developers take in response to the identified smells, we conducted an empirical study of code smells in the code reviews of the two most active OpenStack projects (Nova and Neutron). We manually inspected 19,146 review comments obtained through keyword search and random selection, and identified 1,190 smell-related reviews, which we analyzed for the causes of the code smells and the actions taken against them. Our analysis found that 1) code smells were not commonly identified in code reviews, 2) smells were usually caused by violations of coding conventions, 3) reviewers usually provided constructive feedback, including fixing (refactoring) recommendations, to help developers remove the smells, and 4) developers generally followed those recommendations and acted on the suggested changes. Our results suggest that 1) developers should closely follow the coding conventions of their projects to avoid introducing code smells, and 2) developers perceive review-based detection of code smells as trustworthy, mainly because reviews are context-sensitive: reviewers, as members of the project's development team, are aware of the context of the code under review.
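To make the notion of a convention-related code smell concrete, the sketch below is an illustrative example only; the create_server function and its helpers are invented for this illustration and do not come from Nova, Neutron, or the study itself. It shows a small "long method" smell of the kind a reviewer might flag, followed by the sort of refactoring a reviewer might recommend.

    # Illustrative "long method" smell (hypothetical code, not from the
    # studied projects): one function validates input, builds a request,
    # and logs, mixing several responsibilities.
    def create_server(name, image, flavor, networks):
        if not name or len(name) > 255:
            raise ValueError("invalid server name")
        if image is None or flavor is None:
            raise ValueError("image and flavor are required")
        request = {"server": {"name": name, "imageRef": image,
                              "flavorRef": flavor, "networks": networks}}
        print("creating server %s" % name)
        return request

    # A refactoring a reviewer might suggest: extract each responsibility
    # into a focused helper so the entry point stays short and readable.
    def _validate(name, image, flavor):
        if not name or len(name) > 255:
            raise ValueError("invalid server name")
        if image is None or flavor is None:
            raise ValueError("image and flavor are required")

    def _build_request(name, image, flavor, networks):
        return {"server": {"name": name, "imageRef": image,
                           "flavorRef": flavor, "networks": networks}}

    def create_server_v2(name, image, flavor, networks):
        _validate(name, image, flavor)
        return _build_request(name, image, flavor, networks)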
While a substantial body of prior research has investigated the form and nature of production code, comparatively little attention has been paid to the characteristics of test code and, in particular, to test smells in that code. In this paper, we explore the relationship between production code properties (at the class level) and a set of test smells in five open source systems. Specifically, we examine whether complexity properties of a production class can be used as predictors of the presence of test smells in the associated unit test. Our results, derived from an analysis of 975 production class-unit test pairs, show that the Cyclomatic Complexity (CC) and Weighted Methods per Class (WMC) of production classes are strong indicators of the presence of smells in their associated unit tests. Lack of Cohesion of Methods (LCOM) in a production class also appears to be a good indicator of the presence of test smells. Perhaps more importantly, all three metrics appear to be good indicators of particular test smells, especially Eager Test and Duplicated Code. Depth of the Inheritance Tree (DIT), on the other hand, was not found to be significantly related to the incidence of test smells. These results have important implications for large-scale software development, particularly where organizations increasingly use, adopt, or adapt open source code as part of their development strategy and need to ensure that classes and methods are kept as simple as possible.
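For readers unfamiliar with the test smells named above, the following sketch is illustrative only; the Account class and its test are invented and are not taken from the five studied systems. It shows an Eager Test: a single test method that exercises several methods of the class under test, which makes a failure harder to trace to one behavior. The usual fix is to split such a method into one test per behavior, each with a descriptive name.

    import unittest

    class Account:
        # Minimal illustrative class; not taken from the studied systems.
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class AccountTest(unittest.TestCase):
        # Eager Test smell: one test method checks several production
        # methods (deposit and withdraw), so a failure does not point
        # to a single behavior.
        def test_account(self):
            acct = Account()
            acct.deposit(100)
            self.assertEqual(acct.balance, 100)
            acct.withdraw(40)
            self.assertEqual(acct.balance, 60)
            with self.assertRaises(ValueError):
                acct.withdraw(1000)

    if __name__ == "__main__":
        unittest.main()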