2007
DOI: 10.1109/tmm.2006.886326
Model-Based Power Aware Compression Algorithms for MPEG-4 Virtual Human Animation in Mobile Environments

Abstract: MPEG-4 body animation parameters (BAP) are used for animation of MPEG-4-compliant virtual human-like characters. Distributed virtual reality applications and networked games on mobile computers require access to locally stored or streamed compressed BAP data. Existing MPEG-4 BAP compression techniques are inefficient for streaming or storing BAP data on mobile computers because: 1) decoding MPEG-4 compressed BAP data entails a significant number of CPU cycles, and hence significant, unacceptable power consumption…

Cited by 11 publications (8 citation statements)
References 20 publications
“…Table I presents the compression simulation of 12 motion-data sequences for lossless compression in the image domain. In this experiment, the VCA mapping technique achieves an average compression ratio of 7.363 against the original AMC file without significant degradation in the reconstructed motion quality, outperforming the existing technique proposed in [2], which reports average compression ratios of 1.7-2.5 for DEF values of 0.17-14. The only quality loss in our process is due to the double-to-integer casting during the image mapping stage.…”
Section: Results (mentioning, confidence: 88%)
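The compression ratio quoted above is plain arithmetic: original file size divided by compressed file size. A minimal sketch of that calculation (the file sizes below are made-up illustrative numbers, not figures from the paper):

```python
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """Ratio of original size to compressed size; higher means better compression."""
    return original_bytes / compressed_bytes

# Illustrative only: a 7,363 KB motion file reduced to 1,000 KB gives a
# ratio of 7.363, the same scale as the figure quoted in the statement above.
ratio = compression_ratio(7363, 1000)
```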
“…The displacement error per frame (DEF) proposed in [2] is used for this purpose. We define the resulting motion data of all joints from the decoded DOF as J, where J(t) = {j_1, j_2, ..., j_n} and j_i = {x_i, y_i, z_i}.…”
Section: Error Metric (mentioning, confidence: 99%)
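Read literally, the DEF compares the decoded joint positions J(t) against the originals, frame by frame. A minimal sketch under the assumption that DEF is the mean Euclidean distance between corresponding joints (the array layout and averaging order are my guesses, not the exact definition from [2]):

```python
import numpy as np

def displacement_error_per_frame(original: np.ndarray,
                                 reconstructed: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding joint positions.

    Both arrays have shape (frames, joints, 3): one (x, y, z) triple per
    joint per frame, matching j_i = {x_i, y_i, z_i} in the quoted notation.
    """
    diffs = np.linalg.norm(original - reconstructed, axis=2)  # (frames, joints)
    return float(diffs.mean())
```

Averaging over joints first and frames second gives the same number as one global mean, since every frame has the same number of joints.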
“…In our experiment, the displacement error per frame (DEF) metric from [1] is used for this purpose. Since animation quality usually depends more on the accuracy of the 3D positions than on that of the modified angles, our evaluation of the DEF is measured with respect to the coordinate positions of the avatar's joints computed from the reconstructed angle modifications.…”
Section: Distortion Metric (mentioning, confidence: 99%)
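The point of the quoted passage is that angle error only becomes meaningful after it is pushed through forward kinematics into position space. A toy sketch assuming a planar serial chain with cumulative joint angles (the actual MPEG-4 body model is a 3D joint hierarchy, so this is illustrative only):

```python
import math

def chain_positions(angles, lengths):
    """Forward kinematics for a 2D serial chain.

    `angles` are per-joint rotations in radians, accumulated along the chain;
    `lengths` are the link lengths. Returns the (x, y) of each joint's end.
    """
    x = y = theta = 0.0
    points = []
    for a, l in zip(angles, lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        points.append((x, y))
    return points

# The same small angle error displaces distal joints more than proximal
# ones, which is why position-space (DEF-style) evaluation tracks the
# perceived animation quality better than raw angle error does.
```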
“…Table 4.1 presents the compression simulation of 12 motion-data sequences for lossless compression in the image domain. In this experiment, the VCA mapping technique achieves an average compression ratio of 7.363 against the original AMC file without significant degradation in the reconstructed motion quality, outperforming the existing technique proposed in [36], which reports average compression ratios of 1.7-2.5 for DEF values of 0.17-14. The only quality loss in our process is due to the double-to-integer casting during the image mapping stage.…”
Section: Experimental Simulation (mentioning, confidence: 87%)
“…Skeleton-based 3D models have the unique characteristic of both structural and temporal coherency, which makes 3D mesh compression techniques unsuitable when applied directly to the VCA. Previously, [36] proposed a model-based power-aware method to achieve efficient compression of the VCA. The structural attributes of the VCA are exploited through a combination of BAP sparsing and indexing techniques.…”
Section: Virtual Human Representation (mentioning, confidence: 99%)
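BAP sparsing, as invoked in the quoted passage, amounts to transmitting only the animation parameters that actually change between frames, keyed by an index. A hedged sketch of that idea (the real MPEG-4 BAP coder uses bit masks and quantization; the helpers below are illustrative, not the paper's algorithm):

```python
def sparse_delta(prev, curr, tol=0.0):
    """Map of index -> value for BAPs that changed beyond `tol` since `prev`.

    Human motion leaves many joint parameters static from frame to frame,
    so this map is usually much smaller than the full parameter list.
    """
    return {i: c for i, (p, c) in enumerate(zip(prev, curr))
            if abs(c - p) > tol}

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus a delta."""
    frame = list(prev)
    for i, v in delta.items():
        frame[i] = v
    return frame
```

Only the indexed delta needs to be stored or streamed; the decoder rebuilds each frame incrementally from its predecessor.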