2023
DOI: 10.1109/tvcg.2023.3247085
ConeSpeech: Exploring Directional Speech Interaction for Multi-Person Remote Communication in Virtual Reality

Cited by 5 publications (2 citation statements) | References 46 publications
“…For example, large-group collaboration with many-to-many participation and various roles may cater for multi-modal ways of joining and might create larger physical and/or digital spaces to house participants. While there are studies that examine many-to-many interactions in MR [33,59], multi-modal and mixed presence collaboration [28,52] or scaled spatial architecture [54], these evaluative studies either manage complexity by making participants co-located [33,54] or isolate a particular scale problem to evaluate [28,52].…”
Section: Large-scale Distributed Collaboration in Mixed Reality (MR)
confidence: 99%
“…As the cyber and physical spaces quickly merge, people exhibit a significant demand for information retrieval (IR) anywhere and anytime in their daily lives [10,14,15,22,56], no longer confined to a specific device or location. With advancements in the computational capabilities of wearable devices, the incorporation of a virtual assistant that can provide on-demand, in-situ answers to users' inquiries has the potential to greatly facilitate the interaction with surrounding targets [68,75] and enhance the naturalness of the user's information retrieval experience [3]. Specifically, smart glasses with gaze tracking open new possibilities for natural information retrieval techniques in daily scenarios by combining the voice and gaze modalities [35,62].…”
Section: Introduction
confidence: 99%