CHI Conference on Human Factors in Computing Systems 2022
DOI: 10.1145/3491102.3517451

Understanding and Designing Avatar Biosignal Visualizations for Social Virtual Reality Entertainment

Cited by 29 publications (9 citation statements) · References 57 publications
“…Lastly, we found that participants were willing to see others' breathing signals, but more reluctant to share their own. This echoes findings from Hassib et al. [17] and Lee et al. [27]. To this end, our work provided unique insights into the role that individual breath visualization modalities play within social breath-responsive systems.…”
Section: Towards Multi-modal Mutual Breath Awareness in Dyadic Collab... (supporting)
confidence: 78%
“…This relates to recent efforts toward sensible human-computer integration [7,21,41], where one may sense information that is otherwise difficult to perceive and recognize due to physical or cognitive limitations, or otherwise normally hidden during face-to-face interactions. Researchers have found that "expressive" biosignals, when displayed as a social cue, have the potential to enhance interpersonal communication and increase interoceptive awareness [15,29,36], which can help us better recognize and express our own and others' emotional and physical states across real [13,14,20,31,37] and virtual reality environments [22,27,52]. While several works have explored a wide range of breath-responsive systems (cf. [22,47,52]), with some works exploring multiple modalities (visual, audio, haptic) [14], it remains unclear how the modality of social breath signals can influence collaborative experiences.…”
Section: Introduction (mentioning)
confidence: 99%
“…Negative emotions are typically kept private, whereas positive feelings are shared. Consequently, the ability to observe a partner's biosignals might not be a positive design feature as it could lead to constant emotional awareness, causing distraction and hindering a meditative state [35]. Surprisingly, in LKM, privacy was less prioritized, with fewer requests for permissions-related settings, such as visibility controls or interaction permissions [40].…”
Section: Conclusion and Discussion (mentioning)
confidence: 99%
“…Then, we established a digital moodboard of existing biosignal representations presented in these papers to view the common design choices. In general, among the design examples we extracted, biosignal information was provided to users in three main modalities: visual (e.g., [18,30,38,39,42,69]), haptic (e.g., [4,36,63,80]), and auditory (e.g., [21,33]). Audio feedback was not considered in our context because it may cause potential distractions if provided in parallel [21] with videos, especially those that possess rich auditory expressions [6].…”
Section: Brainstorming of Modalities and Features for Encoding Frisson (mentioning)
confidence: 99%