Refactoring aims to improve the quality of software without altering its functional behavior. Understanding developers' refactoring activities is essential to improving software maintainability. The use of machine learning (ML) libraries and frameworks in software systems has grown significantly in recent years, making their maintainability crucial. Due to their data-driven nature, ML libraries and frameworks often follow a different development process than traditional projects and may therefore undergo distinctive types of refactoring, such as those related to data handling. State-of-the-art refactoring detection tools have not been evaluated in the ML technical domain and are not designed to detect ML-specific refactoring types (e.g., data manipulation) in ML projects; as a result, they may miss potential refactoring operations, especially ML-specific ones. Furthermore, a large share of ML libraries and frameworks are written in Python, which has limited tooling support for refactoring detection. PyRef, a rule-based, state-of-the-art tool for Python refactoring detection, can identify 11 types of refactoring operations with relatively high precision. In contrast, state-of-the-art tools for other languages detect far more comprehensive lists of refactorings; for example, RMiner can detect 99 refactoring types in Java projects. Inspired by previous work that leverages commit messages to detect refactoring, we introduce MLRefScanner, a prototype tool that applies machine learning techniques to detect refactoring commits in Python ML projects. MLRefScanner detects commits involving both ML-specific refactoring operations and additional refactoring operations beyond the scope of state-of-the-art detection tools. To demonstrate the effectiveness of our approach, we evaluate MLRefScanner on 199 open-source ML libraries and frameworks and compare it against other refactoring detection tools for Python projects. Our findings show that MLRefScanner outperforms existing tools in detecting refactoring-related commits, achieving an overall precision of 94% and recall of 82%. It also identifies commits involving ML-specific and additional refactoring operations that state-of-the-art tools miss. Combining MLRefScanner with PyRef further increases precision and recall to 95% and 99%, respectively. MLRefScanner provides a valuable contribution to the Python ML community, allowing developers to detect refactoring-related commits in their Python ML projects more effectively. Our study also sheds light on a promising direction: leveraging machine learning techniques to detect refactoring activities in other programming languages or technical domains where the commonly used rule-based detection approaches are not sufficient.
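Since the abstract describes commit-message-based detection only at a high level, the sketch below illustrates the general technique: a text classifier over commit messages. It assumes a TF-IDF representation and a logistic regression model, with hypothetical training data; this is not MLRefScanner's actual feature set, model, or data.

```python
# Illustrative sketch of a commit-message refactoring classifier.
# NOT MLRefScanner's actual pipeline: TF-IDF features, logistic
# regression, and the labeled messages below are all assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labeled commits: 1 = refactoring-related, 0 = not.
messages = [
    "Refactor data loader into separate module",
    "Rename preprocess_batch to normalize_batch",
    "Fix off-by-one error in epoch counter",
    "Add CONTRIBUTING.md",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF features feeding a linear classifier.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(messages, labels)

# Classify an unseen commit message.
print(clf.predict(["Extract duplicated tensor reshaping into a helper"]))
```

One plausible way the reported precision/recall gains from combining the two tools could arise is a simple union: flag a commit as refactoring-related if either the learned classifier or the rule-based PyRef detects it.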
Considering the effect of utility harmonic impedance variations on harmonic responsibility, this paper proposes a method based on piecewise bound-constrained optimization to evaluate load harmonic responsibilities. The wavelet packet transform is employed to determine the times at which the utility harmonic impedance changes. The harmonic monitoring data is divided into segments within which the utility harmonic impedance is treated as constant. The problem of assessing harmonic responsibility under utility harmonic impedance changes is then solved with a piecewise bound-constrained optimization model. Furthermore, the interior-point, sequential quadratic programming, and active-set algorithms are each adopted to calculate the instantaneous harmonic responsibilities of the harmonic loads. Finally, a weighted summation yields the total harmonic responsibility. To demonstrate the validity of the method, simulation tests are carried out on an experimental circuit and the IEEE 13-bus distribution system.
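To make the piecewise idea concrete, below is a minimal sketch on synthetic scalar data. The change point is assumed already detected (the paper uses the wavelet packet transform for that step), each segment is fitted with a bound-constrained least-squares model U = Z*I + U_bg using SLSQP (an SQP-family solver, one of the algorithm classes named above), and segment responsibilities are combined by weighted summation. The scalar data model and all names here are simplifying assumptions, not the paper's exact formulation over complex harmonic phasors.

```python
# Minimal sketch of piecewise bound-constrained responsibility
# estimation on synthetic scalar data; the real method operates on
# complex harmonic phasors with wavelet-packet change detection.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic measurements: utility impedance changes at sample 100.
I = rng.uniform(5.0, 10.0, 200)                    # load harmonic current
Z_true = np.where(np.arange(200) < 100, 2.0, 3.5)  # piecewise impedance
U_bg = 4.0                                         # background voltage
U = Z_true * I + U_bg + rng.normal(0.0, 0.1, 200)  # PCC harmonic voltage

segments = [(0, 100), (100, 200)]  # change point assumed known

def fit_segment(Us, Is):
    """Bound-constrained least squares for [Z, U_bg] on one segment."""
    def residual(p):
        Z, b = p
        return np.sum((Us - (Z * Is + b)) ** 2)
    res = minimize(residual, x0=[1.0, 0.0], method="SLSQP",
                   bounds=[(0.0, 10.0), (0.0, 10.0)])
    return res.x

total, weights = 0.0, 0.0
for lo, hi in segments:
    Z_hat, b_hat = fit_segment(U[lo:hi], I[lo:hi])
    # Instantaneous responsibility: share of PCC voltage due to the load.
    resp = np.mean(Z_hat * I[lo:hi] / U[lo:hi])
    w = hi - lo  # weight each segment by its length
    total += w * resp
    weights += w
print(f"weighted total harmonic responsibility: {total / weights:.3f}")
```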