2017
DOI: 10.1145/3134756

The Effect of Computer-Generated Descriptions on Photo-Sharing Experiences of People with Visual Impairments

Abstract: Like sighted people, visually impaired people want to share photographs on social networking services, but find it difficult to identify and select photos from their albums. We aimed to address this problem by incorporating state-of-the-art computer-generated descriptions into Facebook's photo-sharing feature. We interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed a photo description feature for the Facebook mobile application. We evaluated this feature with…


Cited by 61 publications (33 citation statements)
References 49 publications
“…The design and use of ATs is a now established thread of research in HCI. Relevant, for example, are projects that have used computer vision to support people with vision impairments to complete tasks like identifying objects, people, and the contents of photos on social media [8], [40], [41], [49], [53], [78], [88], [90], [91], [92], [93], [94], [95]. For example, VizWiz [8] allowed blind people to photograph images for algorithms or crowd workers to describe.…”
Section: AI ATs and Social Interactions (mentioning)
confidence: 99%
“…For example, VizWiz [8] allowed blind people to photograph images for algorithms or crowd workers to describe. Subsequently, a suite of apps and services providing such access has become widely available and affordable [4], [27], [55], [88], [91]. However, these apps and services respond little to social environments and cues.…”
Section: AI ATs and Social Interactions (mentioning)
confidence: 99%
“…While some definitions focus specifically on creating "good" captions for people who are blind [9,11,22,79,84,94,95,100,108], only a few studies directly integrate preferences reported by people who are blind [94]. To our knowledge, our work is the first to identify crowdworkers' questions and concerns about how to create good image captions-especially for people who are blind.…”
Section: Defining a "Good" Caption (mentioning)
confidence: 99%
“…For instance, images of maps were investigated the most (N = 10), followed by graphs (N = 6). Interestingly, while the accessibility of photographs for BLV was largely investigated in terms of web accessibility [20,38], only three out of 33 papers aimed to support photographs particularly using touchscreen devices. In addition, as touchscreen devices themselves have accessibility issues for people with visual impairments requiring accurate hand-eye coordination [25], four papers focused on improving the accessibility of the touchscreen-based interface itself, such as soft buttons [39][40][41] and gestures [42].…”
mentioning
confidence: 99%