2010
DOI: 10.1111/j.1467-8659.2009.01585.x
Bidirectional Texture Function Compression Based on Multi‐Level Vector Quantization

Abstract: The Bidirectional Texture Function (BTF) is becoming widely used for accurate representation of real-world material appearance. In this paper a novel BTF compression model is proposed. The model resamples input BTF data into a parametrization, allowing decomposition of individual view and illumination dependent texels into a set of multi-dimensional conditional probability density functions. These functions are compressed in turn using a novel multi-level vector quantization algorithm. The result of this algor…

Cited by 27 publications (37 citation statements)
References 43 publications
“…Additionally, this technique has the considerable advantage that the texture mapping units of the GPU can be utilized to perform interpolation both in the angular and spatial domain. This reduces the decompression costs considerably in comparison with techniques using clustering [Müller et al 2003], sparse representations [Ruiters and Klein 2009] or vector quantization [Havran et al 2010], to name just a few. There are also tensor factorization based approaches [Wu et al 2008], which are, like our wavelet compression, capable of further compressing the spatial dimensions.…”
Section: BTF Compression and Real-time Rendering
confidence: 99%
“…Tsai et al [7] further push the idea to a k-clustered tensor approximation, which is suitable for efficient real-time rendering of the compressed BTFs. Havran et al [8] compressed the BTF by adopting a multidimensional conditional probability density function in conjunction with vector quantization. Mipmapping was handled by directly applying the same algorithm to averaged BTF data.…”
Section: Previous Work
confidence: 99%
“…The major limitation with matrix-based approaches is that the high dimensional structure of the BTF data set is not exploited at the compression stage; matrices are two-dimensional and only correlations among columns are exploited. Havran et al [6] decompose individual ABRDFs into multidimensional conditional probability density functions, which are then further compressed using multi-level vector quantization. The proposed approach achieves high compression ratios, supports mip-mapping and importance sampling for Monte Carlo based renderers, and the authors report rendering rates of up to 170 fps on the GPU for point based lighting.…”
Section: Related Work
confidence: 99%
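The excerpt above describes compressing multidimensional distributions with vector quantization: similar vectors are replaced by indices into a small learned codebook. The following is a minimal sketch of plain single-level vector quantization via Lloyd's (k-means) algorithm on toy data; it does not reproduce the paper's multi-level variant or the conditional-PDF decomposition, and all names and parameters here are illustrative.

```python
# Minimal vector-quantization sketch: learn a k-entry codebook with
# Lloyd's algorithm, then store each vector as a codeword index.
# Illustrative only -- not the multi-level scheme of Havran et al.
import numpy as np

def build_codebook(vectors, k, iters=20, seed=0):
    """Learn a k-entry codebook for the sample vectors (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest codeword (Euclidean distance).
        dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        # Move each codeword to the centroid of its assigned vectors.
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each vector by the index of its nearest codeword."""
    dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dist.argmin(axis=1)

# Toy data: 1000 six-dimensional "texel response" vectors from two clusters.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.05, (500, 6)),
                  rng.normal(1.0, 0.05, (500, 6))])
cb = build_codebook(data, k=2)
idx = quantize(data, cb)
# Storage drops from 1000 x 6 floats to 1000 small indices plus a 2 x 6 codebook.
```

In the BTF setting the payoff comes from the sheer redundancy of the data: many texels share near-identical view/illumination responses, so a compact codebook plus per-texel indices captures most of the appearance at a fraction of the raw size.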
“…The efficiency of these compressed approaches, together with improvements in BTF acquisition systems and the increasing computational power of GPUs, contributed to the increasing popularity of BTF based rendering systems [1,3,4]. However, most of these highly efficient compressed representations do not allow intuitive editing of the BTF data [5,6,7], as required by product designers. Interactive global illumination rendering with ordinary computing resources using an intuitively editable representation of the BTF remains an elusive goal.…”
Section: Introduction
confidence: 99%