The efficiency and performance of a software application depend largely on the testing strategy adopted by the firm. Apart from the tools, techniques, and skills used for testing, the testing duration also plays an important role in establishing software reliability, which in turn defines the operational performance of the software. The decision on testing duration depends on the failure behavior exhibited by the software during testing and on the cost incurred at the various phases of development. In this paper, we study a multi-release model whose fault removal process is affected by random irregular fluctuations (white noise) and by the error generation phenomenon, and we determine the optimal testing time for multi-release software. The fault removal process is governed by testing coverage, which is itself subject to random fluctuations. The model shows encouraging results, as it captures the stochastic nature of the fault detection process. The optimal testing time is determined using a Genetic Algorithm, with the goal of minimizing the expected software development cost while achieving the desired reliability level for each release. A real-life four-release fault dataset from Tandem Computers is used to demonstrate the methodology numerically. Sensitivity analysis shows that the presence of white noise directly affects both the cost and the optimal testing duration. Capturing irregular fluctuations in the fault detection rate improves sensitivity, flexibility, early detection, the discovery of unsuspected patterns, and fault diagnosis. By going beyond existing methodologies, this approach offers distinct advantages for detecting faults and can contribute to more dependable and efficient systems across a variety of domains.
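To illustrate the optimization scheme described above, the following is a minimal sketch of a Genetic Algorithm that searches for a testing time minimizing an expected cost function under a reliability constraint. All parameters, the cost structure, and the mean value function here are hypothetical placeholders (a simple exponential fault-removal model), not the paper's stochastic testing-coverage model with error generation; they serve only to show the shape of the constrained search.

```python
import math
import random

# Hypothetical illustrative parameters (NOT taken from the paper):
A, B = 100.0, 0.05             # total fault content and fault detection rate
C1, C2, C3 = 10.0, 50.0, 5.0   # cost per fault removed in testing / in the field, cost per unit testing time
R0 = 0.95                      # desired reliability level for the release

def mean_faults(t):
    """Expected faults removed by time t under a simple exponential m(t) = A*(1 - exp(-B*t))."""
    return A * (1.0 - math.exp(-B * t))

def reliability(t, x=1.0):
    """R(x|t) = exp(-(m(t+x) - m(t))): probability of no failure in (t, t+x]."""
    return math.exp(-(mean_faults(t + x) - mean_faults(t)))

def cost(t):
    """Expected development cost: testing-phase removals + field removals + testing effort."""
    return C1 * mean_faults(t) + C2 * (A - mean_faults(t)) + C3 * t

def penalized_cost(t):
    """Add a large penalty when the reliability constraint R(t) >= R0 is violated."""
    r = reliability(t)
    return cost(t) + (1e6 * (R0 - r) if r < R0 else 0.0)

def genetic_algorithm(pop_size=40, generations=100, t_max=500.0, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, t_max) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=penalized_cost)
        elite = pop[: pop_size // 2]                 # selection: keep the best half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2.0                    # arithmetic crossover
            child += rng.gauss(0.0, t_max * 0.02)    # Gaussian mutation
            children.append(min(max(child, 0.0), t_max))
        pop = elite + children
    return min(pop, key=penalized_cost)

t_opt = genetic_algorithm()
```

With these placeholder values the unconstrained cost minimum lies below the reliability boundary, so the search settles at the smallest testing time that satisfies R(t) >= R0, which is the typical outcome in such cost-reliability trade-offs.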