2022 IEEE 19th International Conference on Software Architecture (ICSA)
DOI: 10.1109/icsa53651.2022.00023
Evaluation Methods and Replicability of Software Architecture Research Objects

Cited by 12 publications (3 citation statements) · References 42 publications
“…To answer our last research question and investigate which methods and which datasets are used by the 52 collected approaches during their evaluation, we adapt the evaluation method taxonomy presented by Konersmann et al (2022) for the software architecture domain. In general, not all methods are applicable to the cognitive robotics domain.…”
Section: Evaluation Methods and Benchmarking
Confidence: 99%
“…Goal G1 shall evaluate the accuracy of the analysis results of our approach, also compared with the different analysis types discussed in Section III. This represents a typical approach to design time analysis evaluation [26]. This includes evaluating the behavior of the analysis in different variants with and without uncertainty and confidentiality violations.…”
Section: A. Evaluation Goals, Questions and Metrics
Confidence: 99%
“…To maximize construct validity, we used a Goal Question Metric plan [2] and oriented our evaluation plan to similar approaches [7], [45]. Konersmann et al [26] state the lack of replication packages and the availability of tools used for the evaluation. To overcome this limitation and to increase reliability, we published a data set containing all evaluation data [3].…”
Section: Threats to Validity
Confidence: 99%
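The last citation statement mentions structuring an evaluation with a Goal Question Metric (GQM) plan. As a rough illustration of how such a plan can be organized, the sketch below models the goal/question/metric hierarchy in Python. All concrete names (the goal G1 wording, the question text, and the `precision`/`recall` metrics) are hypothetical examples, not taken from the cited papers.

```python
# Minimal sketch of a Goal-Question-Metric (GQM) plan structure.
# The specific goal, question, and metrics below are illustrative
# assumptions, not content from the cited evaluation.
from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str
    unit: str


@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)


@dataclass
class Goal:
    purpose: str    # e.g. "evaluate"
    issue: str      # e.g. "accuracy"
    object: str     # e.g. "analysis results"
    viewpoint: str  # e.g. "researcher"
    questions: list[Question] = field(default_factory=list)


# Hypothetical plan echoing a goal like G1 (accuracy of analysis results):
g1 = Goal(
    purpose="evaluate",
    issue="accuracy",
    object="analysis results",
    viewpoint="researcher",
    questions=[
        Question(
            text="How accurate is the detection of confidentiality violations?",
            metrics=[Metric("precision", "ratio"), Metric("recall", "ratio")],
        )
    ],
)

print(len(g1.questions))  # 1
```

Encoding the plan as data rather than prose makes it straightforward to publish alongside a replication package, which is the gap the citing authors set out to close.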