To be relevant to the goals of an enterprise, an industrial software engineering research organization must identify problems of interest to, and find solutions that have an impact on, the software development organizations within the company. Using a systematic measurement program both to identify the problems and assess the impact of solutions is key to satisfying this need. Avaya has had such a program in place for about seven years. Every year we produce an annual report known as the State of Software in Avaya that describes software development trends throughout the company and that contains prioritized recommendations for improving Avaya's software development capabilities. We start by identifying the goals of the enterprise and use the goal-question-metric approach to identify the measures to compute. The result is insight into the enterprise's problems in software development, recommendations for improving the development process, and problems that require research to solve. We will illustrate the process with examples from the Software Technology Research Department in Avaya Labs, whose purpose is to improve the state of software development and know it. "Know it" means that improvement should be subjectively evident and objectively quantifiable. "Know it" also means that one must be skilled at identifying the data sources, performing the appropriate analyses to answer the questions of interest, and validating that the data are accurate and appropriate for the purpose. Examples will include how and why we developed a measure of software quality that appeals to customers, how and why we are studying the effectiveness of distributed software development, and how and why we are helping development organizations to adopt iterative development methods. We will also discuss how we keep the company and the department apprised of the current strengths and weaknesses of software development in Avaya through the publication of the annual State of Software in Avaya Report. Our purpose is both to provide a model for assessment that others may emulate, based on seven years of experience, and to spotlight analyses and conclusions that we feel are common to software development today.
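To make the goal-question-metric step concrete, here is a minimal sketch of a GQM breakdown as a plain data structure. The goal, questions, and metric names are hypothetical examples chosen for illustration; they are not taken from the State of Software in Avaya report.

```python
# Hypothetical goal-question-metric (GQM) breakdown: a goal is refined into
# questions, and each question is answered by one or more metrics.
from dataclasses import dataclass, field


@dataclass
class Goal:
    statement: str
    # question text -> metrics that answer it
    questions: dict[str, list[str]] = field(default_factory=dict)


improve_quality = Goal(
    statement="Improve delivered software quality as customers perceive it",
    questions={
        "How many defects do customers find after release?": [
            "customer-found defects per release",
            "customer-found defects per thousand changed lines",
        ],
        "Where do customer-found defects cluster in the code?": [
            "share of defects in the most defect-prone 1% of files",
        ],
    },
)

for question, metrics in improve_quality.questions.items():
    print(question)
    for metric in metrics:
        print("  metric:", metric)
```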
As the development of software products frequently transitions among globally distributed teams, the knowledge about the source code, design decisions, original requirements, and the history of troublesome areas gets lost. A new team faces tremendous challenges to regain that knowledge. In numerous projects we observed that only 1% of project files are involved in more than 60% of the customer-found defects (CFDs), so focusing quality improvement on such files can greatly reduce the risk of poor product quality. We describe a mostly automated approach that annotates the source code at the file and module level with historic information from multiple version control, issue tracking, and organizational directory systems. Risk factors (e.g., past changes and authors who left the project) are identified via a regression model, and the riskiest areas undergo a structured evaluation by experts. The results are presented via a web-based tool, and project experts are then trained to use the tool in conjunction with a checklist to determine risk remediation actions for each risky file. We have deployed the approach in seven projects in Avaya and are continuing deployment to the remaining projects while we evaluate the results of earlier deployments. The approach is particularly helpful for focusing quality improvement effort on new releases of deployed products in a resource-constrained environment.
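As a rough illustration of the regression-based risk ranking described above, the sketch below fits a logistic regression on a few file-level features mined from project history. The feature names, data layout, and use of scikit-learn are assumptions made for illustration, not the authors' actual implementation.

```python
# Sketch: rank files by predicted risk of a future customer-found defect (CFD)
# using a regression model over history-based features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per file; counts would be mined from version control, issue tracking,
# and directory data. Values here are made up.
files = pd.DataFrame({
    "file": ["a.c", "b.c", "c.c", "d.c", "e.c"],
    "past_changes":     [120, 4, 35, 210, 9],
    "departed_authors": [3,   0, 1,  5,   0],
    "past_defect_fixes": [18, 0, 2,  40,  1],
    "had_cfd_next_release": [1, 0, 0, 1, 0],  # label: CFD observed later?
})

X = files[["past_changes", "departed_authors", "past_defect_fixes"]]
y = files["had_cfd_next_release"]

model = LogisticRegression().fit(X, y)
files["risk"] = model.predict_proba(X)[:, 1]

# The riskiest files would then go to structured expert review with the checklist.
print(files.sort_values("risk", ascending=False)[["file", "risk"]])
```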
[Context] With the proliferation of desktop and mobile platforms, the development and maintenance of identical or similar applications on multiple platforms is urgently needed. [Goal] We study a software product deployed to more than 25 software/hardware combinations over 10 years to understand multi-platform development practices. [Method] We use semi-structured interviews, project wikis, version control systems (VCSs), and issue tracking systems to understand and quantify these practices. [Results] We find that the projects use modification request (MR) cloning, MR review meetings, and a cross-platform coordinator role as the three primary means of coordination. We find that forking code temporarily relieves the coordination needs and is driven by divergent schedules, market needs, and organizational policy. Based on our qualitative findings we propose quantitative measures of coordination, redundant work, and parallel development. [Conclusions] A model of coordination intensity suggests that it is related to the amount of parallel and redundant work. We hope that this work will provide a basis for quantitative understanding of issues faced in multi-platform software development.
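To give a flavor of the kind of quantitative measures proposed above, the sketch below computes a toy "redundant work" and "parallel development" figure from modification-request (MR) records. The record format and the exact definitions used here are hypothetical illustrations, not the measures defined in the paper.

```python
# Sketch: toy measures of redundant work (cloned MRs across platforms) and
# parallel development (overlapping cross-platform MR activity).
from dataclasses import dataclass
from typing import Optional


@dataclass
class MR:
    mr_id: str
    platform: str
    cloned_from: Optional[str]  # id of the MR this one was cloned from, if any
    start_day: int
    end_day: int


mrs = [
    MR("MR-1", "desktop", None,   0, 5),
    MR("MR-2", "mobile",  "MR-1", 2, 8),   # clone of MR-1: redundant work
    MR("MR-3", "desktop", None,   6, 9),
]

# Redundant work: fraction of MRs that are clones of an MR on another platform.
redundant = sum(m.cloned_from is not None for m in mrs) / len(mrs)

# Parallel development: fraction of cross-platform MR pairs whose active
# intervals overlap in time.
pairs = [(a, b) for i, a in enumerate(mrs) for b in mrs[i + 1:]
         if a.platform != b.platform]
parallel = sum(a.start_day <= b.end_day and b.start_day <= a.end_day
               for a, b in pairs) / len(pairs)

print(f"redundant work: {redundant:.2f}, parallel development: {parallel:.2f}")
```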