2013 35th International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse.2013.6606705
A large scale Linux-Kernel based benchmark for feature location research

Abstract: Many software maintenance tasks require locating the code units that implement a certain feature (termed feature location). Feature location has been an active research area for more than two decades. However, there is a lack of publicly available, large-scale benchmarks for evaluating and comparing feature location approaches. In this paper, we present a Linux-kernel-based benchmark for feature location research. This benchmark is large scale and extensible. By providing rich feature and program information …

Cited by 7 publications (7 citation statements); references 8 publications.
“…The input for the feature location task presented in Section 4 might be considered scant if we compare it with concern location in maintenance tasks, where it is common practice to trace bug reports using the names and comments in the source code. However, in cases similar to our context of feature-based variants (e.g., the Linux-Kernel benchmark for feature location [26]), we can see that, as in EFLBench, only feature names and descriptions are used as input. In that benchmark the feature location task is at the granularity of classes or code fragments, whereas in our case it is at the coarse granularity of plugins, where we only provide the plugin names as input.…”
Section: Limitations and Threats to Validity
confidence: 77%
“…This definition is very general and open to interpretation, so one recurrent challenge in implementing SPLs is deciding the granularity that features will have at the implementation level [18]. Coarse granularity (e.g., components or plugins [19,20,21,22,23,24]) makes maintaining the SPL easier, while fine granularity (e.g., source code classes or code fragments [25,26]) might complicate the development and maintenance of the SPL. Thus, there are very diverse scenarios regarding the granularity of the reusable assets in SPLs.…”
Section: Background on Feature Location in Feature-Based Variants
confidence: 99%
“…For instance, ArgoUML variants [8] have been extensively used. However, none of the presented case studies has been proposed as a benchmark, except the variants of the Linux kernel by Xing et al. [32]. This benchmark considers 12 variants of the Linux kernel, from which a ground truth is extracted with the traceability of 2400 features to code parts.…”
Section: Related Work
confidence: 99%
“…In the past, this kind of exploration already culminated in realistic benchmarks (e.g., Linux kernel variants [19] or Eclipse releases [15]). …”
Section: Introduction
confidence: 99%