2015
DOI: 10.1049/iet-sen.2014.0169
Model to estimate the software development effort based on in‐depth analysis of project attributes

Abstract: Over the past years, numerous models have been proposed to estimate development effort in the early stages of a software project. The existing models have mostly relied on soft computing techniques and weighting methods. Although they have reduced the complexity and vagueness of software project attributes, attempts are ongoing to develop more accurate and reliable estimation models. This paper concentrates on selective classification of software projects based on underlying attributes to localise the d…

Cited by 14 publications (12 citation statements) · References 52 publications
“…There are various similarity functions in the ABE methods, for example, Euclidean (EUC) similarity function [5,13,14], Manhattan (MHT) similarity function [7,18,57,58,60–66], maximum distance similarity function [8,67,68], Minkowski (MKS) similarity function [46,57], fuzzy [69,70], and optimized induced learning (OIL) [71]. The performance of the different similarity functions may vary based on the type of features (numerical, ordinal, or nominal) and the distribution of data samples in N-dimensional feature space.…”
Section: Evaluation Structures in ABE Techniques
mentioning confidence: 99%
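The distance-based similarity functions named in the statement above (Euclidean, Manhattan, Minkowski, maximum distance) all measure how far apart two projects sit in feature space and then convert that distance into a similarity score. The sketch below shows one common way this is done in analogy-based estimation (ABE); the reciprocal distance-to-similarity conversion and the normalised toy data are assumptions for illustration, not details taken from the cited papers.

```python
import numpy as np

def minkowski_distance(x, y, p=2):
    """Minkowski distance: p=1 gives Manhattan, p=2 gives Euclidean."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def maximum_distance(x, y):
    """Maximum distance: the largest per-feature gap between two projects."""
    return np.max(np.abs(x - y))

def similarity(distance, delta=1e-4):
    """Convert a distance into a similarity score.

    The reciprocal form 1 / (distance + delta) is a common choice in ABE;
    delta avoids division by zero for identical (normalised) feature vectors.
    """
    return 1.0 / (distance + delta)

# Example: rank historical projects by similarity to a new project.
new_project = np.array([0.4, 0.7, 0.1])           # normalised feature values
history = np.array([[0.5, 0.6, 0.2],
                    [0.1, 0.9, 0.8],
                    [0.4, 0.7, 0.15]])
scores = [similarity(minkowski_distance(new_project, h, p=2)) for h in history]
ranking = np.argsort(scores)[::-1]                # most similar analogue first
print(ranking)
```

With nominal features, the absolute difference above would typically be replaced by a 0/1 mismatch indicator, which is one reason the quoted statement notes that performance depends on the feature type.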
“…[7,14,20,37,40,50] In the ABE methods, the effort of a new project can be estimated based on the efforts of K similar projects utilizing an adaptation function. In the majority of the previous ABE methods, mean [5,14,36–47,49–54,56,57,64,68], median [8,67], and inverse rank weighted mean [7,18,48,51,55,62,63,66] are the most commonly used adaptation functions.…”
Section: Evaluation Structures in ABE Techniques
mentioning confidence: 99%
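The adaptation functions listed above combine the known efforts of the K retrieved analogues into a single estimate. A minimal sketch of the three mentioned (mean, median, and inverse rank weighted mean) follows; the inverse-rank weighting shown is the usual formulation in the ABE literature, not a definition quoted from this paper.

```python
import numpy as np

def adapt_effort(efforts, method="irwm"):
    """Combine the efforts of the K most similar projects into one estimate.

    `efforts` must be ordered from most to least similar analogue.
    """
    efforts = np.asarray(efforts, dtype=float)
    k = len(efforts)
    if method == "mean":
        return efforts.mean()
    if method == "median":
        return np.median(efforts)
    if method == "irwm":
        # Inverse rank weighted mean: the closest analogue gets weight K,
        # the next gets K-1, ..., the least similar gets weight 1.
        weights = np.arange(k, 0, -1)
        return np.average(efforts, weights=weights)
    raise ValueError(f"unknown adaptation method: {method}")

# K = 3 analogues, ordered by decreasing similarity (effort in person-hours).
print(adapt_effort([1200, 1500, 900], method="mean"))    # 1200.0
print(adapt_effort([1200, 1500, 900], method="median"))  # 1200.0
print(adapt_effort([1200, 1500, 900], method="irwm"))    # (3*1200 + 2*1500 + 1*900) / 6 = 1250.0
```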
“…There are many software sizing methods, viz. function points, use case points, story points (for agile projects), object-based count [3]–[6], etc., but the IFPUG function point is the most widely used and accepted software sizing method.…”
Section: Definition 4.1 Software Size
mentioning confidence: 99%
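Since the statement singles out the IFPUG function point as the most widely used sizing method, a brief sketch of the standard calculation may help: unadjusted function points are a weighted count of five function types, then scaled by a value adjustment factor derived from 14 general system characteristics. The weights below are the conventional IFPUG complexity weights; the counts in the example are hypothetical.

```python
# Standard IFPUG complexity weights per function type (low / average / high).
IFPUG_WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def function_points(counts, gsc_ratings):
    """IFPUG adjusted function points.

    `counts` maps each function type to (low, average, high) occurrence counts;
    `gsc_ratings` is the list of 14 general system characteristic ratings (0-5).
    """
    ufp = sum(c * w
              for ftype, per_level in counts.items()
              for c, w in zip(per_level, IFPUG_WEIGHTS[ftype]))
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # value adjustment factor
    return ufp * vaf

# Hypothetical counts, e.g. 10 low-complexity external inputs, 5 average, 2 high.
counts = {"EI": (10, 5, 2), "EO": (8, 4, 1), "EQ": (6, 3, 0),
          "ILF": (4, 2, 1), "EIF": (2, 1, 0)}
print(function_points(counts, gsc_ratings=[3] * 14))
```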
“…Khatibi et al. have proposed a model similar to that described by Bardsiri and Khatibi, but with some modifications. Khatibi et al. have used ANOVA and ANCOVA to determine the weight of each project attribute.…”
Section: Overview of Selected Studies
mentioning confidence: 99%
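The excerpt says only that ANOVA and ANCOVA were used to determine attribute weights, without spelling out the procedure. One plausible reading, sketched below, is to weight each categorical attribute by how strongly its levels separate the observed effort (a one-way ANOVA F-statistic, normalised across attributes); the function name, the normalisation, and the mini data set are assumptions for illustration, not the authors' actual method.

```python
import numpy as np
from scipy.stats import f_oneway

def anova_attribute_weights(projects, attribute_columns, effort_column):
    """Weight each categorical project attribute by a one-way ANOVA F-statistic.

    Attributes whose levels separate effort more strongly get larger weights;
    `projects` is a list of dicts, and weights are normalised to sum to 1.
    """
    f_values = {}
    for attr in attribute_columns:
        # Group observed efforts by the attribute's categorical level.
        groups = {}
        for p in projects:
            groups.setdefault(p[attr], []).append(p[effort_column])
        f_stat, _p_value = f_oneway(*groups.values())
        f_values[attr] = f_stat
    total = sum(f_values.values())
    return {attr: f / total for attr, f in f_values.items()}

# Hypothetical mini data set: two categorical attributes and observed effort.
projects = [
    {"language": "java", "team": "small", "effort": 900},
    {"language": "java", "team": "large", "effort": 2100},
    {"language": "c",    "team": "small", "effort": 1100},
    {"language": "c",    "team": "large", "effort": 2500},
    {"language": "java", "team": "small", "effort": 950},
    {"language": "c",    "team": "large", "effort": 2300},
]
print(anova_attribute_weights(projects, ["language", "team"], "effort"))
```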
“…They used the firefly algorithm to optimize the values of the coefficients of the basic forms of the COCOMO 81 and COCOMO II models. Their study shows that the firefly algorithm outperforms the other optimization algorithms in far fewer iterations on a NASA data set. Khatibi et al.52 have proposed a model similar to that described by Bardsiri and Khatibi,46 but with some modifications. Khatibi et al. have used ANOVA and ANCOVA to determine the weight of each project attribute.…”
mentioning confidence: 99%
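For context, the basic COCOMO form whose coefficients the firefly algorithm tunes is Effort = a * KLOC^b. The sketch below fits a and b to historical data with an ordinary log-space least-squares fit; this is only a stand-in for the metaheuristic search described in the citing study (any optimizer that minimises an error measure over (a, b) could be swapped in), and the data points are hypothetical.

```python
import numpy as np

def fit_basic_cocomo(kloc, effort):
    """Fit the basic COCOMO form Effort = a * KLOC**b to historical data.

    A plain least-squares fit in log space stands in here for the firefly
    search described in the citing study.
    """
    b, log_a = np.polyfit(np.log(kloc), np.log(effort), deg=1)
    return np.exp(log_a), b

def cocomo_effort(kloc, a, b):
    """Estimated effort (person-months) for a project of `kloc` thousand lines."""
    return a * kloc ** b

# Hypothetical historical projects: size in KLOC and actual effort.
kloc = np.array([10.0, 25.0, 60.0, 120.0])
effort = np.array([26.0, 74.0, 200.0, 460.0])
a, b = fit_basic_cocomo(kloc, effort)
print(a, b, cocomo_effort(40.0, a, b))
```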