Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. We evaluated this hypothesis using a 3 × 2 × 4 partial factorial, randomized experimental design. Forty-eight graduate students in computer science participated in the experiment. They were assembled into sixteen three-person teams. Each team inspected two SRS using some combination of Ad Hoc, Checklist, or Scenario methods. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). The experimental results are that (1) the Scenario method had a higher fault detection rate than either the Ad Hoc or Checklist method, (2) Scenario reviewers were more effective at detecting the faults their scenarios were designed to uncover, and were no less effective at detecting other faults than either Ad Hoc or Checklist reviewers, (3) Checklist reviewers were no more effective than Ad Hoc reviewers, and (4) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses.
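The four measurements above can be sketched as simple set arithmetic. This is a hedged illustration, not the paper's actual analysis code; the function name, the fault-ID representation, and the exact definitions (e.g., taking the meeting report as the team's result) are assumptions made for clarity.

```python
def inspection_metrics(total_faults, individual_finds, meeting_report):
    """total_faults: set of all known fault IDs in the SRS.
    individual_finds: dict mapping reviewer name -> set of fault IDs
        that reviewer found during individual preparation.
    meeting_report: set of fault IDs recorded at the collection meeting."""
    n = len(total_faults)
    union_individual = set().union(*individual_finds.values())
    # (1) individual fault detection rate, per reviewer
    individual_rates = {r: len(found) / n
                        for r, found in individual_finds.items()}
    # (2) team fault detection rate (assumed here: faults in the meeting report)
    team_rate = len(meeting_report) / n
    # (3) meeting gain rate: faults first identified at the meeting
    gain_rate = len(meeting_report - union_individual) / n
    # (4) meeting loss rate: found by an individual, never reported at the meeting
    loss_rate = len(union_individual - meeting_report) / n
    return individual_rates, team_rate, gain_rate, loss_rate
```

The paper's finding that meetings produced no net improvement corresponds, in these terms, to gain_rate and loss_rate roughly canceling out.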
Empirical studies play a fundamental role in modern science, helping us understand how and why things work, and allowing us to use this understanding to materially alter our world. Defining and executing studies that change how software development is done is the greatest challenge facing empirical researchers. The key to meeting this challenge lies in understanding what empirical studies really are and how they can be most effectively used, not in new techniques or more intricate statistics. If we want empirical studies to improve software engineering research and practice, then we need to create better studies and we need to draw more credible conclusions from them. Concrete steps we can take today include designing better studies, collecting data more effectively, and involving others in our empirical enterprises.

The Authors

Professor Dewayne E. Perry is currently the Motorola Regents Chair of Software Engineering at The University of Texas at Austin. The first half of his computing career was spent as a professional programmer, with the latter part combining research (as a visiting faculty member in Computer Science at Carnegie Mellon University) and consulting in software architecture and design. The last 16 years were spent doing software engineering research at Bell Laboratories in Murray Hill, NJ. His appointment at UT Austin began in January 2000. His research interests (in the context of software system evolution) include empirical studies, formal models of software processes, process and product support environments, software architecture, and the practical use of formal specifications and techniques. He is particularly interested in the role architecture plays in the coordination of multi-site software development, as well as its role in capitalizing on company software assets in the context of product lines.
His educational interests at UT include building a great software engineering program at both the graduate and undergraduate levels, creating a software engineering research center, and focusing on the empirical aspects of software engineering to create a mature and rigorous empirical software engineering discipline. He is a Co-Editor in Chief of Wiley's Software Process: Improvement & Practice; a former associate editor of IEEE Transactions on Software Engineering; a member of ACM SIGSOFT and the IEEE Computer Society; and has served as organizing chair, program chair, and program committee member for various software engineering conferences. From 1979 through 1998, he was both a member of technical staff and a manager at Bell Labs, Lucent Technologies Inc., working in and managing development groups in switching and computer products. He was a founding member of the Software Production Research Department at Naperville, Illinois, where he pursued research interests in understanding how to measure, model, and do credible empirical studies with large and complex software developments. In 1999, he retired from Bell Labs, started his own consulting company (Brincos, Inc.), and joined Motorola, Inc. to build high availa...
Software design patterns package proven solutions to recurring design problems in a form that simplifies reuse. We are seeking empirical evidence of whether using design patterns is beneficial. In particular, one may prefer using a design pattern even if the actual design problem is simpler than that solved by the pattern, i.e., if not all of the functionality offered by the pattern is actually required. Our experiment investigates software maintenance scenarios that employ various design patterns and compares designs with patterns to simpler alternatives. The subjects were professional software engineers. In most of our nine maintenance tasks, we found positive effects from using a design pattern: Either its inherent additional flexibility was achieved without requiring more maintenance time or maintenance time was reduced compared to the simpler alternative. In a few cases, we found negative effects: The alternative solution was less error-prone or required less maintenance time. Although most of these effects were expected, a few were surprising: A negative effect occurs although a certain application of the Observer pattern appears to be well justified, and a positive effect occurs despite superfluous flexibility (and, hence, complexity) introduced by a certain application of the Decorator pattern. Overall, we conclude that, unless there is a clear reason to prefer the simpler solution, it is probably wise to choose the flexibility provided by the design pattern because unexpected new requirements often appear. We identify several questions for future empirical research.
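The pattern-versus-simpler-alternative trade-off discussed above can be made concrete with a minimal sketch. This is a hypothetical illustration, not material from the experiment: it contrasts an Observer-based design, where new listener types can be added without touching the subject, with a hard-wired alternative that is shorter but fixed to one collaborator.

```python
class Subject:
    """Observer-pattern version: observers register themselves."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # Any object with an update() method can listen; the subject
        # never needs modification when new observer types appear.
        for obs in self._observers:
            obs.update(event)


class Logger:
    """One possible observer: records every event it receives."""
    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)


class HardWiredSubject:
    """Simpler alternative: the subject calls one logger directly."""
    def __init__(self, logger):
        self._logger = logger

    def notify(self, event):
        self._logger.update(event)
```

The Observer version carries extra machinery (the registration list, the indirection through update) that pays off only if new kinds of listeners actually arrive, which is precisely the maintenance-time question the experiment measures.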
Computers have become indispensable to scientific research. They are essential for collecting and analyzing experimental data, and they have largely replaced pencil and paper as the theorist's main tool. Computers let theorists extend their studies of physical, chemical, and biological systems by solving difficult nonlinear problems in magnetohydrodynamics; atomic, molecular, and nuclear structure; fluid turbulence; shock hydrodynamics; and cosmological structure formation. Beyond such well-established aids to theorists and experimenters, the exponential growth of computer power is now launching the new field of computational science. Multidisciplinary computational teams are beginning to develop large-scale predictive simulations of highly complex technical problems. Large-scale codes have been created to simulate, with unprecedented fidelity, phenomena such as supernova explosions (see figures 1 and 2), inertial-confinement fusion, nuclear explosions (see the box on page 38), asteroid impacts (figure 3), and the effect of space weather on Earth's magnetosphere (figure 4). Computational simulation has the potential to join theory and experiment as a third powerful research methodology. Although, as figures 1-4 show, the new discipline is already yielding important and exciting results, it is also becoming all too clear that much of computational science is still troublingly immature. We point out three distinct challenges that computational science must meet if it is to fulfill its potential and take its place as a fully mature partner of theory and experiment: the performance challenge (producing high-performance computers), the programming challenge (programming for complex computers), and the prediction challenge (developing truly predictive complex application codes). The performance challenge requires that the exponential growth of computer performance continue, yielding ever larger memories and faster processing.
The programming challenge involves the writing of codes that can efficiently exploit the capacities of the increasingly complex computers. The prediction challenge is to use all that computing power to provide answers reliable enough to form the basis for important decisions. The performance challenge is being met, at least for the next 10 years. Processor speed continues to increase, and massive parallelization is augmenting that speed, albeit at the cost of increasingly complex computer architectures. Massively parallel computers with thousands of processors are becoming widely available at relatively low cost, and larger ones are being developed. Much remains to be done to meet the programming challenge. But computer scientists are beginning to develop languages and software tools to facilitate programming for massively parallel computers.