Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, 2017
DOI: 10.1145/3133850.3133863
Can we crowdsource language design?

Cited by 11 publications (7 citation statements)
References 24 publications
“…Chamberlain [17] compared functional-style to literal-style approaches for specifying topology of streaming applications (i.e., pipes-and-filters style applications) using Mechanical Turk, finding that users were more likely to prefer literal-style specifications, and experienced programmers were more likely to understand the literal-style specifications than the functional-style ones. Wilson et al [103] investigated crowdsourcing more esoteric language design decisions, finding low consistency (people did not give consistent answers when asked similar questions repeatedly) and low consensus (people did not agree with each other on which design choice was best). Crowdsourcing approaches can scale well, but typically require that the studies be of relatively short duration.…”
Section: Related Work
confidence: 99%
“…A number of studies employed workers in order to identify preferences and consensus; for example, researchers have sought to identify preferences about the order of writing expressions such as new byte [10 + length] and new byte[length + 10] [4], about gradual typing semantics [34], and about specific features in a novel programming language [35].…”
Section: Surveys
confidence: 99%
“…As with much research, it is also unclear here whether the file-drawer effect (not writing up results that contradict researchers' expectations) has resulted in a publication bias that suppresses researchers' negative experiences with crowdsourcing. That being said, the reviewed studies do also report some "negative results," such as identifying that MTurk workers (or programmers in general) should likely not be used for finding a consensus for features of a new programming language [35], and observing that a particular feature of a system developed by the researchers is less efficient than other approaches [22].…”
Section: Study Designs and Data Quality
confidence: 99%
“…Tunnell Wilson et al [34] have previously tried crowdsourcing language design decisions using Mechanical Turk. They created surveys for a variety of language features to investigate Turkers' expectations and consistency.…”
Section: User Studies
confidence: 99%