Keywords: IT portfolio management; quantitative IT portfolio management; volatility benchmark; IT dashboard; requirements metric; requirements creep; scope creep; requirements scrap; requirements churn; compound monthly growth rate; volatility tolerance factor; π-ratio; ρ-ratio; requirements volatility dashboard

Abstract

In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects, together comprising 16,500 function points, with projects varying in size and duration, for which we were able to quantify requirements volatility. This representative subportfolio stems from a much larger portfolio of IT projects. We calculated the volatility from the function point counts that were available to us and aggregated these figures into a requirements volatility benchmark. We found that maximum requirements volatility rates depend on size and duration, which refutes currently accepted industrial averages. For instance, a monthly growth rate of 5% is commonly considered a critical failure factor, yet in our low-risk portfolio more than 21% of successful projects showed a volatility above 5%. We proposed a mathematical model, taking size and duration into account, that provides a maximum healthy volatility rate more in line with the reality of low-risk IT portfolios. Based on this model, we proposed a tolerance factor expressing the maximal volatility tolerance for a project or portfolio. For a low-risk portfolio the empirically found tolerance is apparently acceptable, and values exceeding it are used to alert IT decision makers. From the model we derived two volatility ratios, the π-ratio and the ρ-ratio, which express how closely the volatility of a project has approached the danger zone in which requirements volatility reaches a critical failure rate.
The volatility data of a governmental IT portfolio were compared against our bancassurance benchmark, immediately exposing a problematic project, whose actual failure corroborated the warning. Where function points are less common, e.g. in the embedded industry, we used daily source code size measures instead and illustrated how to govern the volatility of a software product line of a hardware manufacturer. With these three real-world portfolios we illustrated that our results serve as an early warning system for projects that are bound to fail due to excessive volatility. Moreover, we developed essential requirements volatility metrics that belong on an IT governance dashboard and presented such a volatility dashboard.
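To illustrate the growth-rate calculation underlying such a benchmark, the sketch below computes the compound monthly growth rate of a project's functional size from its function point counts and flags projects against the flat 5% monthly threshold mentioned above. The function names, the example figures, and the fixed threshold are illustrative only; the paper's own model makes the tolerance depend on project size and duration, and is not reproduced here.

```python
def compound_monthly_growth_rate(initial_fp, final_fp, months):
    """Compound monthly growth rate of functional size.

    initial_fp, final_fp: function point counts at project start and end.
    months: project duration in months.
    """
    return (final_fp / initial_fp) ** (1.0 / months) - 1.0

def exceeds_flat_tolerance(initial_fp, final_fp, months, tolerance=0.05):
    # Flat 5% monthly tolerance: the industrial average the paper refutes.
    # The paper's model would instead scale this with size and duration.
    return compound_monthly_growth_rate(initial_fp, final_fp, months) > tolerance

# A hypothetical project growing from 200 to 320 function points in 12 months
# grows roughly 4.0% per month, below the flat 5% threshold:
rate = compound_monthly_growth_rate(200, 320, 12)
```

Under the flat 5% rule this project would pass; the paper's point is that for some size/duration combinations even higher rates are healthy, while for others a lower rate already signals trouble.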
A statistical method is proposed for quantifying the impact of factors that influence the quality of cost estimates for IT-enabled business projects. We call these factors risk drivers, as they influence the risk of misestimating project costs. The method can readily be transferred to other important IT key performance indicators (KPIs), such as schedule misestimation or functionality underdelivery. We used logistic regression as the modeling technique to estimate the quantitative impact of risk factors. We did so because logistic regression has been applied successfully in fields such as medical science, e.g. in perinatal epidemiology, to answer questions that bear a striking resemblance to those of project risk management. In our study we used data from a large organization in the financial services industry to assess the applicability of logistic modeling to quantifying IT risks. With this real-world example we illustrated how to scrutinize the quality and plausibility of the available data. We explained how to deal with factors, also called risk factors, that cannot be influenced by project management before or in the early stages of a project but that can affect the outcome of the estimation process. We demonstrated how to select the risk drivers using logistic regression. Our research has shown that it is possible to properly quantify these risks, even from crude data. We discussed the interpretation of the models found and showed that the findings support decision making on measures to identify potential misestimates and thus mitigate IT risks for individual projects. We proposed increasing the efficiency of the auditing process by using the cost misestimation models found to classify all projects as either risky or non-risky.
Our analyses revealed that projects must not be overstaffed and that the ratio of external developers must be kept small to obtain better cost estimates. Our research showed that business units that report on financial information tend to be risk mitigating, as they have more cost underruns than business units without such reporting; the latter have more cost overruns. We also discovered a maturity mismatch: an increase from CMM level 1 to 2 did not reduce the disparity between a cost estimate and its actual value if the maturity of the business was not also increased.
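A minimal sketch of the classification idea described above, assuming entirely hypothetical features and toy data (the organization's actual risk drivers and coefficients are not published here): fit a logistic model on historical project attributes and classify a new project as risky when its predicted misestimation probability crosses a threshold. Plain gradient descent stands in for whatever fitting procedure the study actually used.

```python
import math

def fit_logistic(X, y, lr=0.05, epochs=5000):
    """Fit a logistic regression model with plain stochastic gradient descent.

    X: list of feature vectors; y: 0/1 labels (1 = cost was misestimated).
    Returns the weight vector and intercept.
    """
    n = len(X[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted misestimation probability
            err = p - yi                      # gradient of the log-loss w.r.t. z
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def risky(w, b, x, threshold=0.5):
    """Classify a project as risky when its predicted probability exceeds the threshold."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z)) > threshold

# Toy data: features = (staffing level, fraction of external developers);
# label 1 means the cost estimate missed badly. Purely illustrative numbers.
X = [(3, 0.1), (4, 0.2), (10, 0.8), (12, 0.9), (5, 0.3), (11, 0.7)]
y = [0, 0, 1, 1, 0, 1]
w, b = fit_logistic(X, y)
```

An auditor could then score each project in the portfolio with `risky(w, b, x)` and concentrate review effort on the projects flagged as risky, which is the efficiency gain the abstract proposes.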