Proceedings of the 22nd International Academic Mindtrek Conference 2018
DOI: 10.1145/3275116.3275153
Implications of Audio and Narration in the User Experience Design of Virtual Reality

Cited by 8 publications (4 citation statements)
References 18 publications
“…An asymmetric co-occurrence matrix is created by calculating the concepts' relative co-occurrence frequencies. This matrix produces a two-dimensional concept map using a proprietary clustering algorithm based on the spring-force model for the many-body problem [83]. The correctness of each concept in this semantic network generates a third hierarchical dimension, which displays the more general parent concepts at the higher levels.…”
[Table residue: category headings Audio First [152–156]; Audio and Multimodal Integration [157–166]; Audio in Multimodal Comparison [167]; Multimodal Experience [168–175]]
Section: Computer-Aided Qualitative Data Analysis Software (CAQDAS)
confidence: 99%
“…Two of them use different eye-tracking techniques to detect the user's gaze and present information about the artwork or artistic building the user is looking at in a virtual environment. In particular, Kelling et al [167] use eye-position estimation based on the IMU sensor of an HMD, Sanchez et al [130] use gaze to control the playback of audio elements, such as effects or noises, while reading a children's tale book, and Kwok et al [113] use eye-tracking glasses. In a fourth study [158], one of the author's interactive installations uses voice and pitch tracking to induce vibration in objects inside a virtual environment based on voice intensity.…”
Section: Interaction With the Virtual Environment
confidence: 99%
“…In a previous evaluation of the initial prototype, several issues with the experience of the application were identified, including problems with navigation, poor image quality, and lack of engagement (Kauhanen et al 2017). In further iterations of the prototype, these issues were addressed, most notably through the addition of audio and narration (Kelling et al 2018). The current study further examines the effectiveness of the improvements and also dives deeper into the complexities of the immersive experience, in an attempt to provide insight for researchers and content creators to utilize and build upon.…”
Section: Hugo Simberg VR: A Virtual Experience of Cultural Journalism
confidence: 99%