Proceedings of the 4th International Workshop on Predictor Models in Software Engineering 2008
DOI: 10.1145/1370788.1370794
Adapting a fault prediction model to allow inter language reuse

Abstract: An important step in predicting error-prone modules in a project is to construct the prediction model using training data from that project, but the resulting model depends on that training data, so it is difficult to apply the model to other projects. The training data consists of metrics data and bug data, both of which must be prepared for each project. Metrics data can be computed with metric tools, but bug data is not so easy to collect. In this paper, we try to reuse the gener…
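The abstract describes building a fault prediction model from one project's metrics and bug data and then attempting to reuse it on another project. The following is a minimal sketch of that setup, assuming tabular per-module data; the column names, file paths, and the scikit-learn learner are illustrative assumptions, not the paper's exact method.

```python
# Sketch: train a fault prediction model on one project's data
# (metrics + bug labels) and apply it to another project's modules.
# Column names, CSV paths, and the learner are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

METRICS = ["loc", "cbo", "wmc", "rfc", "dit", "noc", "lcom", "nom"]  # example OO metrics

source = pd.read_csv("project_a_modules.csv")   # has METRICS columns + "buggy" (0/1)
target = pd.read_csv("project_b_modules.csv")   # has METRICS columns only

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(source[METRICS], source["buggy"])

# Cross-project use: predict fault-proneness of the other project's modules.
target["predicted_buggy"] = model.predict(target[METRICS])
```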

Cited by 111 publications (79 citation statements). References 21 publications.
“…Since the performance of CPDP is usually very poor [59], researchers have proposed various techniques to improve CPDP [29,37,51,54].…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…The metric compensation transforms a target dataset similar to a source dataset by using the average metric values [54]. To evaluate the performance of the metric compensation, Watanabe et al collected two defect datasets with the same metric set (8 object-oriented metrics) from two software projects and then conducted CPDP [54].…”
Section: Background and Related Work (mentioning)
confidence: 99%
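The metric compensation quoted above rescales the target dataset using average metric values so that it resembles the source dataset the model was trained on. A sketch of that transformation is given below, assuming plain NumPy arrays of per-module metric values; treat it as an approximation of the description in the citation, not a verbatim reimplementation of the original formulation.

```python
# Sketch of metric compensation: scale each target metric by the ratio of
# the source project's average to the target project's average, so the
# compensated target data resembles the source training data.
import numpy as np

def compensate(target_metrics: np.ndarray, source_metrics: np.ndarray) -> np.ndarray:
    """target_metrics, source_metrics: 2-D arrays (modules x metrics)."""
    source_avg = source_metrics.mean(axis=0)
    target_avg = target_metrics.mean(axis=0)
    # Avoid division by zero for metrics that are constantly zero in the target.
    ratio = np.divide(source_avg, target_avg,
                      out=np.ones_like(source_avg), where=target_avg != 0)
    return target_metrics * ratio
```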
“…Thus, change data collection requires analysis of the software's project history. However, in many cases, this data may either be unavailable or difficult to collect [1][2][3]. This limits the applicability of change prediction models for those software projects where local data is scarce.…”
Section: Introduction (mentioning)
confidence: 99%
“…They also investigate a new group of bio-inspired techniques better known as Artificial Immune System (AIS) algorithms for their ability to detect change prone classes using inter project validation. This paper investigates the following research questions: (1) Can the prediction model of one software project be effectively used to determine change prone classes of another, i.e. is inter project validation feasible?…”
Section: Introduction (mentioning)
confidence: 99%
“…A potential way of predicting faults for projects of these companies without historical fault data is to make use of these public and open source projects as training data. Cross-project fault prediction refers to predicting faults in a project using prediction models trained from historical data of other projects [9,10]. There are some studies focusing on this issue and their results show that cross-project fault prediction is still a serious challenge [9,11]. For machine learning based predictions, the effect of a predictor depends on two factors: the training data and the learning algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
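The last statement notes that a machine-learning predictor's effectiveness depends on two factors: the training data and the learning algorithm. Below is a hedged sketch of a cross-project evaluation loop that varies both; the specific learners, the F1 metric, and the data layout are assumptions for illustration, not the cited studies' exact setup.

```python
# Sketch of a cross-project defect prediction (CPDP) experiment that varies
# the two factors mentioned above: the training project and the learner.
# Dataset layout, learner choices, and the F1 metric are illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def cross_project_scores(projects, metrics_cols, label_col="buggy"):
    """projects: dict of name -> pandas.DataFrame with metrics + a binary label."""
    learners = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "naive_bayes": GaussianNB(),
        "decision_tree": DecisionTreeClassifier(random_state=0),
    }
    results = []
    for train_name, train_df in projects.items():
        for test_name, test_df in projects.items():
            if train_name == test_name:
                continue  # cross-project only: never test on the training project
            for learner_name, learner in learners.items():
                learner.fit(train_df[metrics_cols], train_df[label_col])
                pred = learner.predict(test_df[metrics_cols])
                results.append((train_name, test_name, learner_name,
                                f1_score(test_df[label_col], pred)))
    return results
```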