Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering 2014
DOI: 10.1145/2601248.2601300

Crowdsourcing software evaluation

Abstract: Crowdsourcing is an emerging online paradigm for problem solving which involves a large number of people, often recruited on a voluntary basis and given, as a reward, tangible or intangible incentives. It harnesses the power of the crowd to minimize costs and to solve problems which inherently require a large, decentralized and diverse crowd. In this paper, we advocate the potential of crowdsourcing for software evaluation. This is especially true in the case of complex and highly variable softwa…

Cited by 13 publications (8 citation statements: 0 supporting, 8 mentioning, 0 contrasting). References 21 publications. Citing statements published between 2015 and 2023.
“…Only five papers (e.g., [11], [39]) conducted case studies related to CST in a real-world context. Lastly, certain aspects of CST were also examined with other research methods such as action research [40], focus group interviews [41], and quantitative survey research [42]. Figure 3 provides an overview of the research methods used in the identified papers.…”
Section: Figure 2: Number of Publications per Outlet
confidence: 99%
“…Table 1 depicts the references identified per type of testing:

Functional/Verification Testing: [3], [43], [44], [45], [46], [47], [48]
Non-Functional Testing: [32], [43], [44] (performance); [49] (vulnerability); [50] (privacy)
Validation and Acceptance Testing: [11], [34], [35], [36]
Usability Testing/User Experience: [35], [41], [51], [52], [53], [54], [55], [56], [57]
Quality of Experience: [26], [58], [59], [60], [61], [62]

Research in functional and verification testing demonstrated that even complex testing tasks, such as the verification of cross-browser issues [46] or the reproduction of context-sensitive app crashes [45], can be handled by the crowd. In this vein, non-functional testing such as performance testing [32] is also possible.…”
Section: Application of Crowdsourced Software Testing
confidence: 99%
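The quoted survey observes that even complex tasks, like verifying cross-browser issues, can be distributed to crowd testers. As a minimal illustrative sketch (not from the cited papers; all names and the 0.7 threshold are assumptions), the snippet below shows one common way such crowd test verdicts can be combined: majority voting per test case, with low-agreement cases flagged rather than force-labeled.

```python
from collections import Counter

def aggregate_verdicts(verdicts, min_agreement=0.7):
    """Combine crowd testers' pass/fail verdicts per test case.

    verdicts: dict mapping test_case_id -> list of 'pass'/'fail' strings.
    Returns: dict mapping test_case_id -> (label, agreement ratio).
    """
    results = {}
    for case_id, votes in verdicts.items():
        counts = Counter(votes)
        label, top = counts.most_common(1)[0]
        agreement = top / len(votes)
        # Flag low-agreement cases for expert review instead of forcing a label.
        results[case_id] = (label if agreement >= min_agreement else "undecided",
                            agreement)
    return results

# Example: three crowd workers tested a cross-browser rendering issue.
print(aggregate_verdicts({"login-chrome": ["pass", "pass", "fail"],
                          "login-safari": ["fail", "fail", "fail"]}))
```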
“…Existing studies have attempted to investigate risk factors and causes associated with the SC paradigm, using roughly six different terms to indicate problems and troubles in the SC process, namely barrier, challenge, issue, risk, concern, and uncertainty. Specifically, some use the term "barrier" [13]-[16] to refer to communication and collaboration difficulties in SC; some use "challenge" [17]-[28] to describe adoption of the SC paradigm from a broader ecosystem perspective; some use "risk" [9], [11], [29]-[32] to refer to dynamic, task-level influencing factors at a finer granularity; some use "concern" [7], [33]-[35] to refer to typical practitioner concerns identified in case studies; and others use vaguely defined terms such as "issue" [12], [36], [37]. In a recent study, Law et al. used the term "uncertainty" to express possible problems related to entities influencing SC [38].…”
Section: Introduction
confidence: 99%
“…Moreover, in [5,6] collective users' feedback was also encouraged for shaping software adaptation, as users are essential for communicating information that cannot be monitored and captured by automated means, nor fully specified by designers at design time, yet is necessary to plan and support adaptation. Furthermore, the authors in [7] stated that the crowd can enrich and maintain the precision of engineers' knowledge about software evaluation via their iterative feedback at runtime (i.e., while the software is in use).…”
Section: Introduction
confidence: 99%
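The citing work above centers on acquiring users' feedback at runtime so that it can inform software adaptation. Purely as a hedged illustration (this API does not appear in the cited papers; every name here is hypothetical), the sketch below shows the general shape of such a mechanism: a feedback record that pairs the user's input with the runtime context in which it was given, so engineers can interpret it alongside monitored data later.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    """One piece of runtime user feedback plus the context it was given in."""
    feature: str    # the software feature being evaluated
    rating: int     # e.g. 1 (poor) to 5 (excellent)
    comment: str    # free-text remarks from the user
    context: dict   # runtime context that automated monitoring alone can't infer
    timestamp: float

def capture_feedback(feature, rating, comment, context):
    # Attach a timestamp so feedback can be correlated with monitored events.
    record = FeedbackRecord(feature, rating, comment, context, time.time())
    # A real system would send this to a feedback service; here we serialize it.
    return json.dumps(asdict(record))

# Example: a user reports that search felt slow on a flaky network.
print(capture_feedback("search", 2, "results took too long",
                       {"network": "3G", "app_version": "1.4.2"}))
```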
“…We follow a qualitative method in two phases, comprising two focus groups in the first phase and an analysis of three forums in the second. In the first phase, we build on our initial findings on the topic in [7] and provide more detailed results on the different aspects of the design and conduct of runtime feedback acquisition. In the second phase, we undertake a detailed analysis of users' feedback on enterprise software applications by examining their posts and responses on three online forums.…”
Section: Introduction
confidence: 99%