2017
DOI: 10.1145/3134680

Novice and Expert Sensemaking of Crowdsourced Design Feedback

Abstract: Online feedback exchange (OFE) systems are an increasingly popular way to test concepts with millions of target users before going to market. Yet, we know little about how designers make sense of this abundant feedback. This empirical study investigates how expert and novice designers make sense of feedback in OFE systems. We observed that when feedback conflicted with frames originating from the participant's design knowledge, experts were more likely than novices to question the inconsistency, seeking critic…

Cited by 19 publications (7 citation statements)
References 26 publications
“…However, more aspects likely come into play. Also, given that designers with varying expertise make sense of and provide feedback differently [14,16], it would be interesting to determine if question-based feedback is perceived differently by non-professional and professional designers.…”
Section: Discussion
confidence: 99%
“…Only participants with an acceptance rate above 97% and more than 500 approved HITs were accepted. The majority of participants (16) were aged between 30-40. Three were between 20-30 years old.…”
Section: Participants
confidence: 99%
“…In crowd feedback systems, feedback requesters face the challenge of having to explore and analyse a potentially high number of responses. Crowd feedback systems thus call for processing and aggregating of individual feedback items to support the feedback receiver in the related sensemaking activities [14]. Crowd feedback systems address this issue in a number of different ways.…”
Section: Aggregating Design Feedback
confidence: 99%
“…They complement existing algorithmic, visual, and crowd debugging systems by detecting and describing complex failures in deployment, like those developers may not have considered due to their own blind spots and biases [6,25,49,56]. Visualizing crowdsourced failure reports continues the emerging theme of distributed or crowd sensemaking [20,21,23,32,33], which has been used to improve clustering [3,13], summarize bug reports [27], and learn model features [17].…”
Section: Introduction
confidence: 99%