2020 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme46990.2020.00026
Detecting Semantic Conflicts via Automated Behavior Change Detection

Cited by 17 publications (10 citation statements)
References 34 publications
“…Another direction would be to consider predicting unwanted feature interactions [13], which are special kinds of bugs that, when taken into account, may improve the predictive ability of defect prediction techniques, as well as provide meaningful insights regarding whether some machine learning classifiers perform better than others on specific kinds of bugs. Here, we could apply techniques for identifying variability-aware bugs [79], or automatically generate test cases for features and use them as partial specifications [80] to identify unwanted feature behavior.…”
Section: Discussion
confidence: 99%
“…code because of "fear of merge conflict." In relation to this conjecture, several studies have reported that merging diverged code between repositories is very laborious as a result of merge conflicts (Stanciulescu et al 2015; Brun et al 2011; de Souza et al 2003; Perry et al 2001; Sousa et al 2018; Mahmood et al 2020; Silva et al 2020). To this end, it would be interesting for future research to interview the developers of our forks (and further forks) to determine whether the lack of support for cherry picking bug fixes or specific functionality does indeed contribute to the lack of code propagation.…”
Section: Implications For Integration Support Tools
confidence: 92%
“…We found in our evaluation that test suites have limited coverage of dependencies, thus RTS may not be able to find tests relevant for changes in dependencies or have enough test data to build a prediction model for average GitHub projects. Finally, Danglot et al [79] and Da Silva et al [80] investigate the use of search-based methods such as test amplification and automated test generation for detecting semantically conflicting changes. Although search-based methods are effective in reducing false positives and to some degree eliminating false negatives present in static analysis, they are limiting for integration test scenarios such as automated dependency updating.…”
Section: Related Work
confidence: 99%
“…Although search-based methods are effective in reducing false positives and to some degree eliminating false negatives present in static analysis, they are limiting for integration test scenarios such as automated dependency updating. Da Silva et al [80] found that automated test generation tools such as EvoSuite [58] have difficulties in generating effective tests for complex objects with internal or external dependencies.…”
Section: Related Work
confidence: 99%
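The citing works above describe using automatically generated tests to detect semantically conflicting changes: a test that captures one parent branch's behavior is re-run against the merged code, and a failure signals a behavior change the textual merge silently introduced. A minimal sketch of that idea, with hypothetical toy functions standing in for the base, parent, and merged program versions (not the paper's actual implementation or EvoSuite output):

```python
# Sketch of behavior-change detection across merge versions.
# base/left/right/merged are hypothetical stand-ins for four
# revisions of the same function; inputs stand in for
# automatically generated test data.

def base(x):
    return x + 1

def left(x):           # left branch changes the increment
    return x + 2

def right(x):          # right branch negates the result
    return -(x + 1)

def merged(x):         # a textual merge combines both edits
    return -(x + 2)

def behavior_preserved(parent, merge, inputs):
    """Run the same generated inputs on a parent and the merge;
    any differing output is a detected behavior change."""
    return all(parent(i) == merge(i) for i in inputs)

inputs = range(-5, 6)
left_conflict = not behavior_preserved(left, merged, inputs)
right_conflict = not behavior_preserved(right, merged, inputs)
print(left_conflict, right_conflict)  # True True: both parents' behavior changed
```

Here both parents' observable behavior differs from the merge on the generated inputs, so a semantic conflict would be reported even though the textual merge succeeded. The limitation noted by Da Silva et al applies when the unit under test involves complex objects with internal or external dependencies, where generating such inputs automatically is much harder.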