Adoption of advanced automated SE (ASE) tools would be more favored if a business case could be made that these tools are more valuable than alternate methods. In theory, software prediction models can be used to make that case. In practice, this is complicated by the "local tuning" problem: predictors for software effort, defects, and threats normally use local data to tune their predictions, and such local tuning data is often unavailable. This paper shows that assessing the relative merits of different SE methods need not require precise local tunings. STAR1 is a simulated annealer plus a Bayesian post-processor that explores the space of possible local tunings within software prediction models. STAR1 ranks project decisions by their effects on effort, defects, and threats. In experiments with NASA systems, STAR1 found one project where ASE tools were essential for minimizing effort/defect/threat, and another project where ASE tools were merely optional.
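The core search behind STAR1 can be sketched with a toy simulated annealer. This is a minimal illustration only, not STAR1's actual model: the factor names and the scoring function below are hypothetical stand-ins for a combined effort/defect/threat score (lower is better) over discrete project choices.

```python
import math
import random

random.seed(1)

# Hypothetical project decisions (not STAR1's real inputs): each factor
# takes a level from 1 (low) to 5 (high), COCOMO-style.
FACTORS = ["pmat", "acap", "tool", "sced"]

def score(choices):
    # Toy stand-in for a combined effort/defect/threat score: lower is
    # better; here it simply rewards high levels, plus a little noise to
    # mimic the model's tuning uncertainty.
    return sum(5 - choices[f] for f in FACTORS) + random.uniform(0, 0.5)

def anneal(steps=1000, temp=10.0, cooling=0.99):
    current = {f: random.randint(1, 5) for f in FACTORS}
    energy = score(current)
    best, best_energy = dict(current), energy
    for _ in range(steps):
        # Propose a neighbor: nudge one factor up or down by one level.
        neighbor = dict(current)
        f = random.choice(FACTORS)
        neighbor[f] = min(5, max(1, neighbor[f] + random.choice([-1, 1])))
        new_energy = score(neighbor)
        # Accept better moves always; worse moves with Boltzmann probability.
        if new_energy < energy or random.random() < math.exp((energy - new_energy) / temp):
            current, energy = neighbor, new_energy
        if energy < best_energy:
            best, best_energy = dict(current), energy
        temp *= cooling  # cool the temperature each step
    return best, best_energy

best, best_energy = anneal()
print(best, round(best_energy, 2))
```

As the temperature cools, the walk degenerates into greedy hill climbing, so the surviving `best` settles near the low-energy (high-level) corner of this toy landscape.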
Abstract. Most process models calibrate their internal settings using local data. Collecting this data is expensive, tedious, and often incomplete. Is it possible to make accurate process decisions without historical data? Variability in model output arises from (a) uncertainty in model inputs and (b) uncertainty in the internal parameters that control the conversion of inputs to outputs. We find that, for USC family process models such as COCOMO and COQUALMO, we can control model outputs by using an AI search engine to adjust the controllable project choices, without requiring local tuning. For example, in ten case studies, we show that the estimates generated in this manner are very similar to those produced by traditional methods (local calibration). Our conclusion is that (a) while local tuning is always the preferred option, there exist some process models for which local tuning is optional; and (b) when building a process model, we should design it such that it can be used without tuning. Word length: 6525 words (4875 words of text + 7 figures at 250 words per figure).
"Faster, Better, Cheaper" (FBC) was a development philosophy adopted by the NASA administration in the mid-to-late 1990s that led to some dramatic successes, such as Mars Pathfinder, as well as a number of highly publicized mission failures, such as the Mars Climate Orbiter and Polar Lander. The general consensus on FBC was "Faster, Better, Cheaper? Pick any two": according to that view, it is impossible to optimize on all three criteria without compromising one of them. This paper checks that view using an AI search tool called STAR. We show that FBC is indeed feasible and produces similar or better results when compared to other methods. However, for FBC to work, there must be a balanced concern for and concentration on the quality aspects of a project. If not, "FBC" becomes "CF" (cheaper and faster), with the inevitable loss in project quality.
Models of software projects input project details and output predictions via their internal tunings. The output predictions, therefore, are affected by variance in the project details P and variance in the internal tunings T. Local data is often used to constrain the internal tunings (reducing T). While constraining internal tunings with local data is always the preferred option, there exist some models for which constraining tuning is optional. We show empirically that, for the USC COCOMO family of models, the effects of P dominate the effects of T; i.e., the output variance of these models can be controlled without using local data to constrain the tuning variance (in ten case studies, we show that the estimates generated by only constraining P are very similar to those produced by constraining T with historical data). We conclude that, if possible, models should be designed such that the effects of the project options dominate the effects of the tuning options. Such models can be used for the purposes of decision making without elaborate, tedious, and time-consuming data collection from the local domain. Copyright © 2009 John Wiley & Sons, Ltd.
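The "P dominates T" claim can be illustrated with a small Monte Carlo experiment on a COCOMO-like effort form, effort = a * KLOC^b * product(EM). The ranges below are illustrative assumptions, not calibrated values: we freeze the project P and vary only the tunings (a, b), then freeze the tunings and vary only the project (size and effort multipliers), and compare the resulting output variances.

```python
import random
import statistics

random.seed(0)

def effort(a, b, kloc, em):
    # COCOMO-II-style form: effort = a * KLOC^b * product(effort multipliers)
    prod = 1.0
    for m in em:
        prod *= m
    return a * (kloc ** b) * prod

def sample_tuning():
    # T: illustrative (uncalibrated) ranges for the a, b tuning parameters
    return random.uniform(2.2, 3.2), random.uniform(0.9, 1.1)

def sample_project():
    # P: illustrative project choices -- size plus three effort multipliers
    kloc = random.uniform(10, 100)
    em = [random.uniform(0.7, 1.4) for _ in range(3)]
    return kloc, em

# Freeze one source of variance, sample the other
kloc0, em0 = sample_project()
a0, b0 = sample_tuning()
vary_t = [effort(*sample_tuning(), kloc0, em0) for _ in range(5000)]
vary_p = [effort(a0, b0, *sample_project()) for _ in range(5000)]

var_t = statistics.pvariance(vary_t)
var_p = statistics.pvariance(vary_p)
print("variance from T only:", round(var_t))
print("variance from P only:", round(var_p))
```

Under these assumed ranges the project options swing the output far more than the tunings do, which is the shape of the empirical result the abstract reports for the USC COCOMO family.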
How can we best find project changes that most improve project estimates? Prior solutions to this problem required the use of standard software process models that may not be relevant to some new project. Also, those prior solutions suffered from limited verification (the only way to assess the results of those studies was to run the recommendations back through the standard process models). Combining case-based reasoning and contrast set learning, the W system requires no underlying model. Hence, it is widely applicable (since there is no need for data to conform to some software process model). Also, W's results can be verified (using holdout sets). For example, in the experiments reported here, W found changes to projects that greatly reduced estimate median and variance, by up to 95% and 83% respectively.
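The holdout-verification idea can be sketched as follows. This is a radically simplified toy with synthetic data, not W's actual contrast-set learner: it picks the attribute value most common among the best (lowest-effort) training cases as the "treatment", then checks on unseen holdout cases whether applying that treatment lowers the median estimate.

```python
import random
import statistics

random.seed(2)

# Synthetic cases: each has one controllable attribute ("tools") and an
# effort estimate that, by construction, is lower when tool use is high.
def make_case():
    tools = random.choice(["low", "high"])
    effort = random.uniform(100, 200) * (0.5 if tools == "high" else 1.0)
    return {"tools": tools, "effort": effort}

cases = [make_case() for _ in range(200)]
train, holdout = cases[:150], cases[150:]

# Contrast-set learning, radically simplified: the attribute value most
# frequent among the 30 lowest-effort training cases becomes the treatment.
best_cases = sorted(train, key=lambda c: c["effort"])[:30]
treatment = max({c["tools"] for c in best_cases},
                key=lambda v: sum(c["tools"] == v for c in best_cases))

# Verification on the holdout set: compare medians with and without it.
baseline = statistics.median(c["effort"] for c in holdout)
treated = statistics.median(c["effort"] for c in holdout if c["tools"] == treatment)
print("treatment:", treatment)
print("holdout median:", round(baseline), "->", round(treated))
```

The point of the holdout split is that the treatment is judged on cases it never saw, so the verification does not depend on re-running any process model.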