Proceedings of the 5th International Workshop on Requirements Engineering and Testing 2018
DOI: 10.1145/3195538.3195540

Cluster-based test scheduling strategies using semantic relationships between test specifications

Abstract: One of the challenging issues in improving test efficiency is achieving a balance between testing goals and testing resources. Test execution scheduling is one way of saving time and budget, where a set of test cases is grouped and tested at the same time. To obtain an optimal test execution schedule, all related information about a test case (e.g. execution time, functionality to be tested, dependency and similarity with other test cases) needs to be analyzed. The test scheduling problem becomes more compl…
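The abstract describes grouping test cases by their semantic relationships before scheduling them for joint execution. As a hedged illustration only (not the authors' implementation), the following Python sketch clusters test specifications by textual similarity and treats each cluster as one execution group; the TF-IDF representation, the example specifications and the similarity threshold are assumptions made for this sketch.

```python
# Illustrative sketch: group test specifications by textual similarity and
# schedule each group for joint execution. This is NOT the paper's method;
# the TF-IDF representation and the 0.3 threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_specs = {
    "TC1": "Verify door control signal is sent when the train stops at the platform",
    "TC2": "Verify door release indication after the train has stopped at the platform",
    "TC3": "Check traction cut-off when the emergency brake is applied",
}

ids = list(test_specs)
tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform(test_specs[i] for i in ids)
similarity = cosine_similarity(vectors)

# Greedy single-pass clustering: a test case joins the first existing cluster
# whose representative it is sufficiently similar to, otherwise starts a new one.
THRESHOLD = 0.3  # assumed value; would be tuned in practice
clusters = []    # each cluster is a list of indices into `ids`
for i in range(len(ids)):
    for cluster in clusters:
        if similarity[i, cluster[0]] >= THRESHOLD:
            cluster.append(i)
            break
    else:
        clusters.append([i])

for n, cluster in enumerate(clusters, start=1):
    print(f"Execution group {n}: {[ids[i] for i in cluster]}")
```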

Cited by 7 publications (7 citation statements) | References 10 publications
“…Similarly, Haidry and Miller [22] prioritized functionally-dependent test cases based on different forms of the graph coverage values. In order to detect functional dependencies among integration test cases at an early stage, our earlier work [23] proposed using natural language processing (NLP) to analyze multiple related artefacts (test specification, software requirement specification and relevant signaling information between functions under test).…”
Section: Related Work | Citation type: mentioning
confidence: 99%
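The citation above refers to detecting functional dependencies by analyzing several artefacts with NLP. As a rough, hedged illustration (not the method of [23]), the sketch below flags two test cases as potentially dependent when their specifications reference overlapping signal names; the signal vocabulary and the matching rule are invented for this example.

```python
# Hedged illustration only: flag two integration test cases as potentially
# dependent when their specifications mention the same signals. The signal
# vocabulary and the matching rule are assumptions, not the approach of [23].
import re
from itertools import combinations

KNOWN_SIGNALS = {"door_release", "traction_cutoff", "emergency_brake"}  # assumed vocabulary

def referenced_signals(spec_text: str) -> set[str]:
    """Return the known signal names mentioned in a test specification."""
    tokens = set(re.findall(r"[a-z_]+", spec_text.lower()))
    return KNOWN_SIGNALS & tokens

specs = {
    "TC1": "Send door_release after emergency_brake is inactive",
    "TC2": "Verify traction_cutoff while emergency_brake is applied",
    "TC3": "Check cabin lighting at startup",
}

for a, b in combinations(specs, 2):
    shared = referenced_signals(specs[a]) & referenced_signals(specs[b])
    if shared:
        print(f"{a} and {b} may be dependent (shared signals: {sorted(shared)})")
```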
“…Analyzing a wide range of test specifications, which are written by humans with variance in language and testing skills, makes the problem more complex. The problem of dependency detection between manual integration test cases has previously been addressed through several approaches, such as a questionnaire-based study, deep learning, natural language processing (NLP) and machine learning [6], [23], [32]. We also proposed an aiding tool called ESPRET for estimating the execution time of manual integration test cases before their first execution [33].…”
Section: Proposed Approach | Citation type: mentioning
confidence: 99%
“…In practice, these two approaches should not significantly change the resulting diversity [5], and would only affect which tests are included first in the subset. Alternatively, other approaches use the information to cluster and then select tests [15,7]. The result of Step 3 is a diverse subset of tests T′ ⊆ T.…”
Section: A Diversity-based Test Optimisation | Citation type: mentioning
confidence: 99%
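The citing work above describes selecting a diverse subset T′ ⊆ T of the tests, either directly from pairwise distances or after clustering. As a hedged sketch of that general idea (not the cited paper's algorithm), the following farthest-point greedy selection repeatedly picks the test most dissimilar from those already chosen; the Jaccard distance over word sets, the toy data and the subset size are assumptions.

```python
# Hedged sketch of diversity-based test selection (farthest-point greedy),
# not the algorithm from the cited work. Jaccard distance over word sets
# and the subset size k are assumptions made for this example.

def jaccard_distance(a: str, b: str) -> float:
    """Jaccard distance between the lower-cased word sets of two specifications."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    union = wa | wb
    if not union:
        return 0.0
    return 1.0 - len(wa & wb) / len(union)

def select_diverse_subset(tests: dict[str, str], k: int) -> list[str]:
    """Greedy farthest-point selection: start from an arbitrary test, then keep
    adding the test whose minimum distance to the selected set is largest."""
    remaining = list(tests)
    selected = [remaining.pop(0)]
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda t: min(jaccard_distance(tests[t], tests[s]) for s in selected),
        )
        remaining.remove(best)
        selected.append(best)
    return selected

tests = {
    "TC1": "verify door release at platform stop",
    "TC2": "verify door release after platform stop signal",
    "TC3": "check traction cutoff under emergency brake",
    "TC4": "check pantograph lowering at neutral section",
}
print(select_diverse_subset(tests, k=2))  # -> ['TC1', 'TC3'] with this toy data
```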
“…In parallel, several studies investigate the application of diversity-based approaches in different domains, such as model-based testing [8,12,13], continuous integration pipelines [14,2], search-based test generation [18] and higher levels of testing such as acceptance [23], system [15] and integration [14] testing. Alternatively, studies also focus on investigating the trade-off when using different distance functions on distinct sources of diversity, such as use cases [5] or modified artefacts [13].…”
Section: B Related Work | Citation type: mentioning
confidence: 99%