It is commonly recognized that software development is highly unpredictable and that software quality cannot easily be improved after a software product is finished. During the software development life cycle (SDLC), project managers have to deal with many technical and management issues, such as high failure rates, cost overruns, low quality, and late delivery. Consequently, in order to produce robust and reliable software products on time and within budget, project managers and developers have to allocate the limited development- and testing-effort and time appropriately. In the past, the distribution of testing-effort or manpower has typically been described by the Weibull or Rayleigh model. In practice, however, development environments or methods may change for various reasons, so when we plan to perform software reliability modeling and prediction, these changes or variations occurring in the development process have to be taken into consideration. In this paper, we study how to use the Parr-curve model with multiple change-points to describe the consumption of testing-effort and how to perform further software reliability analysis. The applicability and performance of the proposed model are demonstrated and assessed using real software failure data. Experimental results are analyzed and compared with other existing models to show that our proposed model gives better predictions.
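For context, the Parr-curve (logistic) testing-effort function is commonly written in the literature with the cumulative effort following a logistic curve; the sketch below assumes this standard parameterization and a generic change-point notation (the symbols $N$, $A$, $\alpha_i$, and $\tau_i$ are illustrative and not necessarily the exact formulation developed in the paper).
\[
W(t) \;=\; \frac{N}{1 + A\,e^{-\alpha t}},
\qquad
w(t) \;=\; \frac{dW(t)}{dt} \;=\; \frac{N A \alpha\, e^{-\alpha t}}{\bigl(1 + A\,e^{-\alpha t}\bigr)^{2}},
\]
where $N$ is the total testing-effort eventually consumed, $A$ is a shape parameter, and $\alpha$ is the effort-consumption rate. With multiple change-points $0 < \tau_1 < \tau_2 < \dots < \tau_k$, one typical approach is to let the consumption rate differ between successive intervals, e.g.,
\[
\alpha(t) = \alpha_i \quad \text{for } \tau_{i-1} \le t < \tau_i,\; i = 1,\dots,k+1 \;(\tau_0 = 0,\ \tau_{k+1} = \infty),
\]
with $W(t)$ pieced together continuously across the change-points; the precise multiple change-point formulation and its parameter estimation are developed in the body of the paper.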