2013 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS)
DOI: 10.1109/wiamis.2013.6616123
Blending real with virtual in 3DLife

Abstract: Part of 3DLife's major goal of bringing the 3D media Internet to life concerns the development and widespread distribution of online tele-immersive (TI) virtual environments. As the techniques powering the challenging tasks of user reconstruction and activity tracking within a virtual environment mature, along with the consumer-grade availability of specialized hardware, this paper focuses on the simple practices used to make real-time tele-immersion within a networked virtual world a reality.

Cited by 5 publications (2 citation statements) · References 9 publications
“…To achieve a plausible and constraint-bound 3D animation flow, the data obtained by the remote capturing component of the module is received and translated to joint-specific local-coordinate orientation quaternions in the remote rendering component before the joint matrices are updated and applied in the skinning calculations taking place in the hardware-accelerated avatar rendering pipeline. The process is described in detail in Algorithm 1 [2]. The root joint's quaternion can safely be obtained from the 3 × 3 global orientation matrix sent by the capturer.…”
Section: Full Body Avatar Control Using Kinect-based Interface
confidence: 99%
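The excerpt above mentions deriving the root joint's quaternion from the 3 × 3 global orientation matrix sent by the capturer. A minimal sketch of that conversion step, using the standard trace-based matrix-to-quaternion method (an illustrative implementation, not the cited paper's Algorithm 1; the function name is hypothetical):

```python
import math

def mat3_to_quat(m):
    """Convert a 3x3 rotation matrix (row-major nested lists) to a
    unit quaternion (w, x, y, z) via the standard trace-based method."""
    tr = m[0][0] + m[1][1] + m[2][2]
    if tr > 0.0:
        s = math.sqrt(tr + 1.0) * 2.0                            # s = 4*w
        return (0.25 * s,
                (m[2][1] - m[1][2]) / s,
                (m[0][2] - m[2][0]) / s,
                (m[1][0] - m[0][1]) / s)
    # Otherwise branch on the largest diagonal element for stability.
    if m[0][0] >= m[1][1] and m[0][0] >= m[2][2]:
        s = math.sqrt(1.0 + m[0][0] - m[1][1] - m[2][2]) * 2.0   # s = 4*x
        return ((m[2][1] - m[1][2]) / s,
                0.25 * s,
                (m[0][1] + m[1][0]) / s,
                (m[0][2] + m[2][0]) / s)
    if m[1][1] >= m[2][2]:
        s = math.sqrt(1.0 + m[1][1] - m[0][0] - m[2][2]) * 2.0   # s = 4*y
        return ((m[0][2] - m[2][0]) / s,
                (m[0][1] + m[1][0]) / s,
                0.25 * s,
                (m[1][2] + m[2][1]) / s)
    s = math.sqrt(1.0 + m[2][2] - m[0][0] - m[1][1]) * 2.0       # s = 4*z
    return ((m[1][0] - m[0][1]) / s,
            (m[0][2] + m[2][0]) / s,
            (m[1][2] + m[2][1]) / s,
            0.25 * s)

# Example: a 90-degree rotation about the z axis yields
# w = z = sqrt(0.5), x = y = 0.
rot_z_90 = [[0.0, -1.0, 0.0],
            [1.0,  0.0, 0.0],
            [0.0,  0.0, 1.0]]
w, x, y, z = mat3_to_quat(rot_z_90)
```

In a skinning pipeline such as the one quoted, the resulting quaternion would then be turned back into a joint matrix and uploaded to the GPU for the blend-weight calculations.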
“…Avateering or puppeteering a virtual 3D avatar refers to the process of mapping a user's natural motor activity and live performance onto a virtual human's deforming control elements in order to faithfully reproduce the user's activity during rendering cycles. A multitude of schemes for full-body avatar control already exist in the scientific literature, based on the skeleton-tracking capabilities offered by software development kits and application programming interfaces plugging into the Kinect sensor [2, 11-13]. Similarly, avatar facial animation through vision-based methods has been explored in a comparable fashion, in which facial features on the user's face are tracked via a Kinect [15] or single-image acquisition methods [3, 4, 10] to generate animation via detailed face rigs or pre-defined blendshapes.…”
Section: Introduction
confidence: 99%
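The blendshape-driven facial animation mentioned in the excerpt above amounts to adding weighted per-vertex offsets of each expression shape to a neutral base mesh, with the weights driven by the tracked facial features. A minimal sketch under that assumption (the function, mesh data, and shape names are illustrative, not from any of the cited systems):

```python
def apply_blendshapes(base, shapes, weights):
    """Blend a neutral mesh with weighted expression shapes.

    base    -- list of (x, y, z) vertex positions of the neutral face
    shapes  -- dict: shape name -> vertex positions (same order as base)
    weights -- dict: shape name -> activation in [0, 1], e.g. driven by
               tracked facial features

    result[i] = base[i] + sum_k weights[k] * (shapes[k][i] - base[i])
    """
    result = [list(v) for v in base]
    for name, w in weights.items():
        if w == 0.0:
            continue  # inactive shape contributes nothing
        target = shapes[name]
        for i, (bx, by, bz) in enumerate(base):
            tx, ty, tz = target[i]
            result[i][0] += w * (tx - bx)
            result[i][1] += w * (ty - by)
            result[i][2] += w * (tz - bz)
    return [tuple(v) for v in result]

# Toy two-vertex mesh: half-activating a "smile" shape lifts the
# mouth-corner vertex halfway toward the fully smiling position.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {"smile": [(0.0, 0.0, 0.0), (1.0, 0.4, 0.0)]}
posed = apply_blendshapes(base, shapes, {"smile": 0.5})
# posed[1] is halfway between (1.0, 0.0, 0.0) and (1.0, 0.4, 0.0)
```

Detailed face rigs replace these simple per-vertex deltas with bone- or controller-driven deformation, but the weighted-combination idea is the same.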