2015
DOI: 10.1179/0308018814z.000000000101
The Interface Implications of Understanding Readers

Abstract: This paper discusses our work conducting reader studies with prototypical reading interfaces designed in the context of funded Digital Humanities (DH) research projects. We discuss our conceptual and methodological approaches and contemplate which methods have been most productive in advancing our understandings of reading practices and interfaces. To situate this discussion, we first contemplate ways in which reading and new reading interfaces have been conceptualized by audiences within and beyond academia t…

Cited by: 3 publications (1 citation statement)
References: 18 publications
“…A longstanding anxiety of the digital humanities has been the degree to which the tools it develops are (or are not) taken up by others, a problem that has been addressed historically through user studies and formal partnerships. Over time, mixed-mode qualitative and quantitative user-testing methods drawn from the social sciences, such as surveys, semi-structured interviews, screen capture, and sideshadowing (Carr 2004; Stafford et al. 2008; Toms and O'Brien 2008; Gainor et al. 2009; Warwick 2012; Nowviskie 2013; Cutic et al. 2016; Miya et al. 2016; Tracy 2016), have expanded to include methods drawn from participatory design (e.g., Schuler and Namoika 1993; Björgvinsson, Pelle & Hillgren 2010) and cooperative design (e.g., Greenbaum and Kyng 1991; Steen 2013), which take seriously Phillips and Zovros' (2013) suggestion that 'participants themselves should be counted as members of the research team' (Dobson 2015, 12); Teresa Dobson's work is notable in this regard (Dobson 2015). These methods have been effective in evaluating experimental prototypes, especially when assessment protocols are designed to encourage what Dobson calls 'generative feedback,' or responses 'that invite one to step away from the ideas instantiated in the prototype and consider the broader context, such as the values inherent in, say, a disciplinary approach to knowledge and knowledge mobilization as evidenced by the responses of readers' (Dobson 2015, 6).…”
Section: Living Labs and the Digital Humanities (citation type: mentioning)
Confidence: 99%