2019 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW)
DOI: 10.1109/icce-tw46550.2019.8991800
3D Virtual-Reality Interaction System

Cited by 9 publications (7 citation statements)
References 4 publications
“…[105][106][107][108][109] Deep learning techniques have also been developed to extract 2D or 3D human posture information from camera data, and semantic segmentation algorithms can discern between pixels of virtual and real objects. [110][111][112][113][114][115][116][117][118] Brain-Computer Interface (BCI) technology, 119,120 which converts brain impulses into commands understood by computing devices, can also be used to merge the virtual and physical worlds. Research in BCI based on AI technology is ongoing.…”
Section: Visualization System (mentioning)
confidence: 99%
“…Techniques like Simultaneous Localization and Mapping (SLAM) can be used to learn the 3D structure and motion of an unfamiliar environment, while eye‐tracking can enhance user microinteractions. 105–109 Deep learning techniques have also been developed to extract 2D or 3D human posture information from camera data, and semantic segmentation algorithms can discern between pixels of virtual and real objects. 110–118 Brain‐Computer Interface (BCI) technology, 119,120 which converts brain impulses into commands understood by computing devices, can also be used to merge the virtual and physical worlds.…”
Section: Cutting‐edge Technologies For Enabling Metaverse (mentioning)
confidence: 99%
“…The situation is analyzed using mathematical models of the convex hull. To form a convex polygon, the gesture keypoints (0, 1, 2, 3, 6, 10, 14, 19, 18, 17) are connected by the convex hull algorithm. The convex hull of the detected hand is depicted by the blue line in Fig.…”
Section: Gesture Recognition Using the Convex Hull Algorithm And Hand... (mentioning)
confidence: 99%
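The citation above describes fitting a convex hull around selected hand keypoints. As an illustration only, the sketch below computes a 2D convex hull with Andrew's monotone chain algorithm; the keypoint coordinates are hypothetical placeholders, not values from the cited paper, which instead selects specific landmark indices from a hand-pose estimator.

```python
# Convex hull of 2D hand keypoints via Andrew's monotone chain.
# Coordinates below are hypothetical; a real pipeline would take
# them from a hand-pose estimator's detected landmarks.

def cross(o, a, b):
    """Z-component of the cross product OA x OB (>0 means a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half: it repeats the other half's start.
    return lower[:-1] + upper[:-1]

# Hypothetical 2D positions for ten selected keypoints:
keypoints = [(0, 0), (1, 3), (2, 5), (4, 6), (6, 5), (7, 2),
             (5, 0), (3, 1), (4, 3), (2, 2)]
hull = convex_hull(keypoints)
print(hull)
```

Interior keypoints such as `(2, 2)` and `(4, 3)` are excluded from the hull polygon; gesture features (e.g. finger curvature relative to the hull boundary) can then be derived from which keypoints lie on versus inside the hull.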
“…Particularly under the background of epidemic situation, human-computer interaction has become increasingly important. In addition to human-computer interaction (HCI), hand pose estimation and gesture recognition [1][2][3] have applications in virtual reality (VR) and augmented reality (AR). During gesture recognition, the finger curvature characteristic is utilized based on the keypoint coordinates obtained by hand pose estimation.…”
Section: Introduction (mentioning)
confidence: 99%
“…The gesture is among the most commonly used expressions by humans, and accurate 3D hand pose estimation has already become a key technology in the fields of Human-Computer Interaction (HCI) and Virtual Reality (VR) [1,2,3,4,5]. It can help humans communicate with machines in a more natural way.…”
Section: Introduction (mentioning)
confidence: 99%