The 21st International ACM SIGACCESS Conference on Computers and Accessibility 2019
DOI: 10.1145/3308561.3353799

Revisiting Blind Photography in the Context of Teachable Object Recognizers

Cited by 26 publications (14 citation statements) · References 38 publications
“…Fiebrink, for example, has shown considerable success in building digital music instruments that can be trained and respond to real-time embodied interaction by users [18]. Indeed, we find possible starting points in object recognizers which allow vision impaired users to build their own training sets [41], [49]. How such approaches might be incorporated into the mundane attunements between human actors, and those with different sensory capacities, presents an open question, but one that seems in line with the premise of this human-centered machine learning and at least technically feasible.…”
Section: Salience in the Moment
confidence: 94%
“…The design and use of ATs is now an established thread of research in HCI. Relevant, for example, are projects that have used computer vision to support people with vision impairments in completing tasks like identifying objects, people, and the contents of photos on social media [8], [40], [41], [49], [53], [78], [88], [90], [91], [92], [93], [94], [95]. For example, VizWiz [8] allowed blind people to photograph images for algorithms or crowd workers to describe.…”
Section: AI ATs and Social Interactions
confidence: 99%
“…However, training an ML-enabled application as a personal assistive technology can itself be inaccessible when it requires skills and abilities similar to those the application is intended to support [23,40]. For example, a blind or visually impaired user is likely unable to use visual feedback when capturing images for personalizing an object recognizer—a challenge that Kacorri et al. and others (e.g., [41,69]) first examined via studies of users' needs in this context, and more recently began addressing through active feedback techniques to assist in image capture [49]. Indeed, in Nakao et al.'s [58] study of DHH users' technical understanding of ML, workshop participants struggled to choose acceptable sound samples for training due to a lack of non-auditory feedback.…”
Section: Human-Centered Machine Learning
confidence: 99%
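The "teachable" personalization loop described above—a user supplies a handful of labeled captures of personal objects, which the recognizer then matches against—can be sketched minimally. This is not the method of any cited system: real teachable recognizers embed images with a pretrained deep network before matching, whereas here the feature vectors, class name, and method names are hypothetical stand-ins, and classification is plain 1-nearest-neighbor.

```python
import math

class TeachableRecognizer:
    """Toy 1-nearest-neighbor recognizer over user-taught examples.

    A stand-in sketch: real systems would replace the raw feature lists
    with embeddings from a pretrained image model.
    """

    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def teach(self, features, label):
        # The user captures an example of a personal object and names it.
        self.examples.append((list(features), label))

    def recognize(self, features):
        # Classify a new capture by its nearest taught example (or None).
        if not self.examples:
            return None
        nearest = min(self.examples, key=lambda ex: math.dist(ex[0], features))
        return nearest[1]

# A user teaches two personal items, then queries a new capture.
r = TeachableRecognizer()
r.teach([0.9, 0.1, 0.0], "my medication bottle")
r.teach([0.1, 0.8, 0.2], "house keys")
print(r.recognize([0.85, 0.15, 0.05]))  # -> my medication bottle
```

The accessibility challenge the excerpt raises sits in the `teach` step: without non-visual feedback on capture quality, the user cannot tell whether the examples they supply are usable.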
“…Technologies associated with AR have recently been used in a variety of systems intended to improve access to the world using smartphones. A number of systems have been developed to help blind people take better photos, generally by using automated approaches to assist in aiming the camera [29,36,65]. VizLens uses a combination of computer vision and crowdsourcing to recognize and guide a blind user through using an inaccessible physical interface [22,23].…”
Section: AR for Making the World More Accessible
confidence: 99%