Post-release detection of a software vulnerability not only costs a company money to fix but also results in loss of reputation and damaging litigation. Techniques to prevent and detect vulnerabilities prior to release are therefore valuable. We performed empirical case studies on two large, widely used open source projects: the Mozilla Firefox web browser and the Red Hat Enterprise Linux kernel. We investigated whether software metrics obtained early in the software development life cycle are discriminative of vulnerable code locations and can guide actions an organization can take to improve its code and development team. We also investigated whether the metrics are predictive of vulnerabilities, so that prediction models can prioritize validation and verification efforts. The metrics fall into three categories: complexity, code churn, and developer activity. The results indicate that the metrics are both discriminative and predictive of vulnerabilities. The predictive model built on all three categories of metrics predicted 70.8% of the known vulnerabilities in Mozilla Firefox by selecting only 10.9% of the project's files. Similarly, the model for the Red Hat Enterprise Linux kernel found 68.8% of the known vulnerabilities by selecting only 13.0% of the files.
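The prioritization described above (finding most known vulnerabilities while inspecting only a small fraction of files) can be illustrated with a minimal sketch. The file names, metric values, and the weighted-sum scoring function below are invented stand-ins for a trained prediction model; the abstract does not specify the model itself.

```python
# Hedged sketch: rank files by a composite of complexity, code-churn,
# and developer-activity metrics, then measure what fraction of known
# vulnerable files falls in the top slice of the ranking.
# All file names and metric values are hypothetical.

files = {
    # name: (cyclomatic_complexity, code_churn, num_developers, vulnerable?)
    "nsHttpChannel.cpp": (310, 1200, 14, True),
    "jsinterp.cpp":      (280,  950, 11, True),
    "nsCookie.cpp":      ( 90,  400,  6, False),
    "prefread.c":        ( 40,  120,  3, False),
    "nsString.cpp":      ( 60,   80,  2, False),
}

def risk_score(metrics):
    complexity, churn, devs = metrics
    # Naive weighted sum standing in for a trained prediction model.
    return 0.4 * complexity + 0.4 * churn / 10 + 0.2 * devs

ranked = sorted(files, key=lambda f: risk_score(files[f][:3]), reverse=True)

budget = max(1, round(0.20 * len(ranked)))   # inspect the top 20% of files
selected = ranked[:budget]
found = sum(files[f][3] for f in selected)
total = sum(v[3] for v in files.values())
print(f"Inspecting {budget}/{len(files)} files finds {found}/{total} vulnerabilities")
```

In the studies, the ranking comes from a fitted statistical model rather than fixed weights, but the evaluation idea is the same: recall of known vulnerabilities at a fixed file-inspection budget.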
design, algorithm, code, or test, does indeed improve software quality and reduce time to market. Additionally, student and professional programmers consistently find pair programming more enjoyable than working alone. Yet most who have not tried and tested pair programming reject the idea as a redundant, wasteful use of programming resources: "Why would I put two people on a job that just one can do? I can't afford to do that!" But we have found, as Larry Constantine wrote, that "Two programmers in tandem is not redundancy; it's a direct route to greater efficiency and better quality." 1 Our supportive evidence comes from professional programmers and from advanced undergraduate students who participated in a structured experiment. The experimental results show that programming pairs develop better code faster with only a minimal increase in prerelease programmer hours. These results apply to all levels of programming skill, from novice to expert.
Earlier Observations
In 1998, Temple University professor John Nosek reported on his study of 15 full-time, experienced programmers working for a maximum of 45 minutes on a challenging problem important to their organization. In their own environments and with their own equipment, five worked individually and 10 worked collaboratively in five pairs. The conditions and materials were the same for both the experimental (pair) and control (individual) groups. A two-sided t-test showed that the study provided statistically significant results. Combining their time, the pairs spent 60% more minutes on the task. Because they worked in tandem, however, they completed the task 40% faster than the control group, and produced better algorithms and code. 2 Most of the programmers were initially skeptical of the value of collaborating and
Abstract. In recent years, the use of, interest in, and controversy about Agile methodologies have grown dramatically. Anecdotal evidence is rising regarding the effectiveness of Agile methodologies in certain environments and for specified projects. However, collection and analysis of empirical evidence of this effectiveness, and classification of appropriate environments for Agile projects, have not been conducted. Researchers from four institutions organized an eWorkshop to synchronously and virtually discuss and gather experiences and knowledge from eighteen Agile experts spread across the globe. These experts characterized Agile Methods and communicated experiences using these methods on small to very large teams. They discussed the importance of staffing Agile teams with highly skilled developers. They shared common success factors and identified warning signs of problems in Agile projects. These and other findings and heuristics gathered through this valuable exchange can be useful to researchers and to practitioners as they establish an experience base for better decision making.
The rise of Agile Methods
Plan-driven methods are those in which work begins with the elicitation and documentation of a "complete" set of requirements, followed by architectural and high-level design development and inspection. Examples of plan-driven methods include various waterfall and iterative approaches, such as the Personal Software Process (PSP) [1]. Beginning in the mid-1990s, some practitioners found these initial requirements-documentation and architecture and design development steps frustrating and, perhaps, impossible [2]. As Barry Boehm [3] suggests, these plan-driven methods may well start to pose difficulties when change rates are still relatively low. The industry and the technology move too fast, and customers have become increasingly unable to definitively state their needs up front.
As a result, several consultants have independently developed methodologies and practices to embrace and respond to the inevitable change they were experiencing. These methodologies and practices are based on iterative enhancement, a technique introduced in 1975 [4], and have come to be known as Agile Methodologies [2,5]. Agile Methodologies are gaining popularity in industry although they comprise a mix of accepted and controversial software engineering practices. It is quite likely that the software industry will find that specific project characteristics determine the prudence of using an Agile or a plan-driven methodology, or a hybrid of the two. In recent years, there have been many stories and anecdotes [6][7][8] of industrial teams experiencing success with Agile methodologies. There is, however, an urgent need to empirically assess the applicability of these methods, in a structured manner, in order to build an experience base for better decision making. This paper contributes to the experience base and discusses the findings of a synchronous, virtual eWorkshop in which experiences and knowledge we...
Software fails, and fixing it is expensive. Research in failure prediction has been highly successful at modeling software failures. Few models, however, consider the key cause of failures in software: people. Understanding the structure of developer collaboration could explain a lot about the reliability of the final product. We examine this collaboration structure through a developer network derived from code-churn information, which can predict failures at the file level. We conducted a case study involving a mature Nortel networking product of over three million lines of code. Failure prediction models were developed using test and post-release failure data from two releases, then validated against a subsequent release. One model's prioritization revealed 58% of the failures in 20% of the files, compared with the optimal prioritization that would have found 61% in 20% of the files, indicating that a significant correlation exists between file-based developer network metrics and failures.
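The developer network described above can be sketched from churn records alone. In this minimal illustration, developers are nodes and two developers are connected when they have changed the same file; a simple per-file metric is then derived from contributor degrees. The commit records, names, and the specific degree-sum metric are invented for illustration; the paper's actual network metrics (e.g., centrality measures) are not specified in the abstract.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical churn records: (developer, file changed).
commits = [
    ("alice", "router.c"), ("bob", "router.c"),
    ("bob", "switch.c"), ("carol", "switch.c"),
    ("alice", "util.c"),
]

# Group developers by the files they touched.
devs_per_file = defaultdict(set)
for dev, path in commits:
    devs_per_file[path].add(dev)

# Undirected developer network: one edge per pair of co-committers.
edges = set()
for devs in devs_per_file.values():
    for a, b in combinations(sorted(devs), 2):
        edges.add((a, b))

# Degree of each developer in the network.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# A simple file-level metric: degree sum of a file's contributors,
# standing in for the centrality-style metrics used in such models.
file_metric = {path: sum(degree[d] for d in devs)
               for path, devs in devs_per_file.items()}
print(file_metric)
```

Files whose contributors occupy well-connected positions in the network score higher, which is the intuition behind correlating file-based developer network metrics with failures.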