Machine Learning (ML), including Deep Learning (DL), systems, i.e., those with ML capabilities, are pervasive in today's data-driven society. Such systems are complex; they are composed of ML models and many subsystems that support learning processes. As with other complex systems, ML systems are prone to classic technical debt issues, especially when they are long-lived, but they also exhibit debt specific to these systems. Unfortunately, there is a gap of knowledge in how ML systems actually evolve and are maintained. In this paper, we fill this gap by studying refactorings, i.e., source-to-source semantics-preserving program transformations, performed in real-world, open-source software, and the technical debt issues they alleviate. We analyzed 26 projects, consisting of 4.2 MLOC, along with 327 manually examined code patches. The results indicate that developers refactor these systems for a variety of reasons, both specific and tangential to ML; that some refactorings correspond to established technical debt categories while others do not; and that code duplication is a major crosscutting theme, particularly involving ML configuration and model code, which was also the most refactored. We also introduce 14 new ML-specific refactorings and 7 new technical debt categories, and put forth several recommendations, best practices, and anti-patterns. The results can potentially assist practitioners, tool developers, and educators in facilitating long-term ML system usefulness.

Index Terms: empirical studies, refactoring, machine learning systems, technical debt, software repository mining

I. INTRODUCTION

In the big data era, Machine Learning (ML), including Deep Learning (DL), systems are pervasive in modern society. Central to these systems are dynamic ML models, whose behavior is ultimately defined by their input data. However, such systems do not consist only of ML models; instead, ML systems typically encompass complex subsystems that support ML processes [1]. ML systems, like other long-lived, complex systems, are prone to classic technical debt [2] issues; yet, they also exhibit debt specific to such systems [3]. While work exists on applying software engineering (SE) rigor to ML systems [4]-[12], there is generally a gap of knowledge in how ML systems actually evolve and are maintained. As ML systems become more difficult and expensive to maintain [1], understanding the kinds of modifications developers are required to make to such systems, our overarching research question, is of the utmost importance.

To fill this gap, we performed an empirical study on common refactorings, i.e., source-to-source semantics-preserving program transformations, a widely accepted mechanism for effectively reducing technical debt [13]-[16], in real-world, open-source ML systems. We set out to discover (i) the kinds of refactorings, both specific and tangential to ML, that are performed, (ii) whether particular refactorings occurred more often in model code vs. other supporting subsystems, and (iii) the types of technical debt ...
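To make the notion of an ML-specific, semantics-preserving refactoring concrete, the sketch below shows a hypothetical "extract function" refactoring that removes duplicated model-configuration code, one of the duplication patterns highlighted above. The model class, hyperparameters, and function names are illustrative assumptions and are not drawn from the studied corpus.

```python
# Hypothetical before/after sketch of an "extract function" refactoring that
# removes duplicated ML configuration code. Names and hyperparameters are
# illustrative only; they are not taken from the study's corpus.
from sklearn.ensemble import RandomForestClassifier

# --- Before: the same hyperparameter configuration is duplicated across two
# --- training entry points, a common source of ML configuration debt.
def train_model_duplicated(X_train, y_train):
    model = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=42)
    return model.fit(X_train, y_train)

def retrain_model_duplicated(X_new, y_new):
    model = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=42)  # duplicate
    return model.fit(X_new, y_new)

# --- After: the configuration is extracted into a single factory function.
# --- Observable training behavior is unchanged (semantics-preserving), but the
# --- duplicated configuration is gone and future changes touch one place.
def make_classifier():
    return RandomForestClassifier(n_estimators=200, max_depth=8, random_state=42)

def train_model(X, y):
    return make_classifier().fit(X, y)
```

Centralizing the configuration in one factory is what makes the change safe to verify: the extracted code constructs an identical model, so behavior is preserved while the duplication is eliminated.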
Social media has become an important method for information sharing. This has also created opportunities for bad actors to easily spread disinformation and manipulate public opinion. This paper explores the possibility of applying Authorship Verification to online communities to mitigate abuse by analyzing the writing style of online accounts and identifying accounts managed by the same person. We expand on our similarity-based authorship verification approach, previously applied to large fanfiction datasets, and show that it works in open-world settings and on shorter documents, and that it is largely topic-agnostic. Our expanded model can link Reddit accounts based on the writing style of only 40 comments with an AUC of 0.95, and the performance increases to 0.98 given more content. We apply this model to a set of suspicious Reddit accounts associated with the disinformation campaign surrounding the 2016 U.S. presidential election and show that the writing style of these accounts is inconsistent, indicating that each account was likely maintained by multiple individuals. We also apply this model to Reddit user accounts that commented on the WallStreetBets subreddit around the 2021 GameStop short squeeze and show that a number of account pairs share very similar writing styles. We also show that this approach can link accounts across Reddit and Twitter with an AUC of 0.91 even when training data is very limited.
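For exposition only, the sketch below shows a generic similarity-based verification baseline: two accounts' comments are represented as character n-gram TF-IDF vectors and compared with cosine similarity against a threshold. This is an assumed illustrative baseline, not the authors' model; the feature choice and the 0.5 threshold are arbitrary assumptions.

```python
# Minimal, generic sketch of similarity-based authorship verification.
# NOT the paper's model: character n-gram TF-IDF + cosine similarity is a
# common stylometric baseline, used here only to illustrate the idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def same_author_score(comments_a, comments_b):
    """Return a stylistic similarity score in [0, 1] for two sets of comments."""
    doc_a, doc_b = " ".join(comments_a), " ".join(comments_b)
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
    vectors = vectorizer.fit_transform([doc_a, doc_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

if __name__ == "__main__":
    account_a = ["tbh the market makers are hedging", "this is not financial advice lol"]
    account_b = ["tbh i think they are hedging gamma", "not financial advice, just vibes"]
    score = same_author_score(account_a, account_b)
    # 0.5 is an arbitrary illustrative threshold, not a calibrated decision boundary.
    print("same author" if score > 0.5 else "different authors", round(score, 3))
```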