Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3411764.3445507

Ask Me or Tell Me? Enhancing the Effectiveness of Crowdsourced Design Feedback

Abstract: Crowdsourced design feedback systems are emerging resources for getting large amounts of feedback in a short period of time. Traditionally, the feedback comes in the form of a declarative statement, which often contains positive or negative sentiment. Prior research has shown that overly negative or positive sentiment can strongly influence the perceived usefulness and acceptance of feedback and, subsequently, lead to ineffective design revisions. To enhance the effectiveness of crowdsourced design feedback, w…

Cited by 10 publications (4 citation statements) · References 45 publications
“…Later, Roitero et al (2021) looked at the longitudinal dimension of crowdsourced truthfulness assessment, observing consistency in the generated labels. La Barbera et al (2020) observed a political bias in crowd-generated truthfulness labels. In summary, the key conclusion that can be drawn from this body of work is that the crowd can be a viable alternative to experts for generating misinformation labels, albeit with the risk of potential bias.…”
Section: Related Work (Crowdsourcing for Misinformation Detection)
confidence: 99%
“…We divide the selected 120 political statements into 20 tasks/units, each containing 6 statements. To ensure all units are balanced in terms of truthfulness level and political party, and to mitigate the cognitive bias of crowd workers (La Barbera et al. 2020; Draws et al. 2022), we merge the original 6-level truthfulness scale into a 3-level one (true, in between, and false) based on the observations made by Roitero et al. (2020). The set of labels given by PolitiFact editors is considered the ground truth in this study.…”
Section: Crowdsourcing Task Design
confidence: 99%
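The unit-construction step quoted above can be illustrated with a short sketch. This is a hypothetical illustration, not code from the cited paper: it assumes each statement carries one of the six PolitiFact ratings and a party label, collapses the ratings to the 3-level scale, and fills units of 6 in round-robin fashion so truthfulness levels and parties stay balanced.

```python
# Minimal, hypothetical sketch of the unit construction described above.
# Assumptions (not from the cited paper): each statement dict has a
# PolitiFact "rating" and a "party" field; ratings collapse as shown.
from collections import defaultdict

SIX_TO_THREE = {
    "true": "true",
    "mostly-true": "true",
    "half-true": "in-between",
    "barely-true": "in-between",
    "false": "false",
    "pants-fire": "false",
}

def build_units(statements, unit_size=6):
    """Group statements into units balanced over merged truthfulness and party."""
    buckets = defaultdict(list)
    for s in statements:
        buckets[(SIX_TO_THREE[s["rating"]], s["party"])].append(s)

    # Draw one statement per (level, party) bucket in round-robin order,
    # so every unit mixes truthfulness levels and parties.
    units, current = [], []
    while any(buckets.values()):
        for key in list(buckets):
            if buckets[key]:
                current.append(buckets[key].pop())
                if len(current) == unit_size:
                    units.append(current)
                    current = []
    if current:  # leftover statements that did not fill a whole unit
        units.append(current)
    return units
```

With 120 statements spread evenly over levels and parties, this yields the 20 units of 6 statements described in the excerpt.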
“…We set standardised thresholds to classify annotations: a composite score between -0.05 and 0.05 indicates a neutral sentiment; a composite score greater than 0.05 indicates a positive sentiment; a composite score less than -0.05 indicates a negative sentiment. This method of sentiment classification has been widely used [20,25]. Table 6 shows the number of annotations under the distinct segments of abstract annotations.…”
Section: Negative
confidence: 99%
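The thresholding rule in the excerpt above matches the common use of VADER's compound polarity score. The following is a minimal sketch under that assumption (i.e., that the "composite score" is VADER's compound score); the function name and example sentences are illustrative, not taken from the cited work.

```python
# Minimal sketch of the quoted threshold rule, assuming the "composite
# score" is VADER's compound polarity score (pip install vaderSentiment).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def classify_sentiment(text, threshold=0.05):
    """Label an annotation as positive, negative, or neutral."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound > threshold:
        return "positive"
    if compound < -threshold:
        return "negative"
    return "neutral"  # compound scores in [-0.05, 0.05]

# Illustrative feedback sentences (hypothetical, not from the study).
print(classify_sentiment("The layout is clean and easy to scan."))
print(classify_sentiment("The color contrast makes the text hard to read."))
```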