Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3313831.3376145
"Hey Model!" – Natural User Interactions and Agency in Accessible Interactive 3D Models

Abstract: While developments in 3D printing have opened up opportunities for improved access to graphical information for people who are blind or have low vision (BLV), printed models alone can provide only limited detail and contextual information. Interactive 3D printed models (I3Ms) that provide audio labels and/or a conversational agent interface can potentially overcome this limitation. We conducted a Wizard-of-Oz exploratory study to uncover the multi-modal interaction techniques that BLV people would like to use when exploring I3M…

Cited by 16 publications (20 citation statements)
References 51 publications
“…To help overcome labelling limitations, 3D printed models are increasingly being combined with low-cost electronics and smart devices to produce interactive 3D printed models (I3Ms). I3Ms have been created and applied across many blind-specific contexts: mapping and navigation [27,31,62]; art [32,35]; and education [23,53,58]. Many I3Ms include button or touch-triggered audio labels that when activated describe different details of the model [21,23,24,27,31,52,59].…”
Section: Interactive 3D Printed Models (I3Ms)
confidence: 99%
“…Recent research has identified that blind users want rich interactions with I3Ms in ways similar to their personal technology, using modalities such as touch and conversational interfaces [53]. Our current work presents the next step in this nascent area, with the co-design of an I3M of the Solar System, Solar I3M.…”
Section: Introduction
confidence: 98%
“…Seipel et al [203] explored NLI for software visualization with AR devices. Reinders et al [191] studied blind and low vision (BLV) people's preferences when exploring interactive 3D printed models (I3Ms). However, no system has supported a natural language interface for data visualization in an immersive way.…”
Section: Presentation
confidence: 99%
“…A widely cited paper by Miele et al (2006) introduces one of the first mapping platforms that used text-to-speech (TTS) in place of braille labeling, an approach adopted by many following studies. Additionally, mapping platforms that also respond to audio input are becoming more feasible with improved speech recognition technology (Abd Hamid and Edwards, 2013; Barbosa and Sá, 2020; Cavazos Quero et al, 2019; Reinders et al, 2020). Mobile computing is also frequently enrolled in multimodal technologies, such as in Yatani et al (2012), Matsuo et al (2020), and Giudice et al (2020), who incorporate tactual, auditory, and vibration feedback in mobile wayfinding applications, or Senette et al (2013), who lay a microcapsule tactile map over a mobile device with an app installed that can recognize it.…”
Section: Production: Multimodality
confidence: 99%