2004
DOI: 10.1007/978-3-540-27817-7_75
Towards an Integrated Publishing Chain for Accessible Multimodal Documents

Cited by 11 publications (3 citation statements) · References 1 publication
“…Multimodal interaction with documents is considered the execution of presentation and navigation tasks according to the reader's preferences in one of three modalities (visual, acoustic, or haptic) or in any preferred combination. Guillon et al (2004) proposed an integrated publishing procedure for accessible multimodal documents based on DAISY 3.0 (DAISY, 2008). The World Wide Web Consortium (W3C, 2008c) proposes guidelines for multimodal interaction (W3C, 2008b).…”
Section: Multimodal Accessibility of Documents
Citation type: mentioning
confidence: 99%
“…In any case, its usefulness as a content format has already been demonstrated, serving as a starting point for HTML, PDF, MP3 and Braille documents (Guillon, 2004). Publishers and libraries should take it seriously into consideration in their future publishing or digitization plans (Tank; Frederiksen, 2007).…”
Section: Conclusions
Citation type: unclassified
“…Existing accessibility tools such as screen readers, Braille terminals and talking browsers increasingly help persons with visual impairments to access and manipulate information and to perform various kinds of activities previously deemed unfeasible for the visually impaired. Yet these techniques are effective when accessing text-based content [1,2,5,10,27], but remain fairly limited when handling visual content. Most studies in this field [14,15,20,28,31] focus on low-vision users by providing visual aids and image-enhancement techniques such as image filters (image contrast manipulation [22], spatial filtering [19], adaptive thresholding [21], and compensation filters [6]) to adapt image quality to the user's visual deficiency.…”
Section: Introduction
Citation type: mentioning
confidence: 99%