1997
DOI: 10.1109/78.611218

Dither and data compression

Abstract: This correspondence presents entropy analyses for dithered and undithered quantized sources. Two methods are discussed that reduce the increase in entropy caused by the dither. The first method supplies the dither to the lossless encoding-decoding scheme. It is argued that this increases the complexity of the encoding-decoding scheme. A method to reduce this complexity is proposed. The second method is the usage of a dead-zone quantizer. A procedure for determining the optimal dead-zone width in the m…
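For intuition, here is a minimal Python sketch of the two quantizers the abstract contrasts: a uniform quantizer with subtractive dither, and a dead-zone quantizer whose zero bin is wider than the regular step (a wider zero bin makes the zero symbol more probable, which lowers the entropy of the quantizer output). The function names and the dead-zone parameterization are ours for illustration; the correspondence's exact construction and its optimal dead-zone width may differ.

```python
import numpy as np

def uniform_quantize(x, step):
    """Plain uniform (mid-tread) quantizer."""
    return step * np.round(x / step)

def dithered_quantize(x, step, rng):
    """Subtractive dither: add uniform dither one step wide before
    quantizing and subtract it afterwards, decorrelating error and input."""
    d = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    return uniform_quantize(x + d, step) - d

def deadzone_quantize(x, step, dead_zone):
    """Dead-zone quantizer (illustrative form): a widened zero bin of
    width `dead_zone` (>= step) maps small inputs to 0; elsewhere the
    quantizer behaves uniformly."""
    q = uniform_quantize(x, step)
    return np.where(np.abs(x) < dead_zone / 2, 0.0, q)
```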

Cited by 14 publications (9 citation statements, published 1998–2018). References 4 publications.
“…The effect of quantization (referred to as quantization noise) is generally modeled by an additive noise that is uniformly distributed in $[-\Delta/2, \Delta/2]$ and uncorrelated with the input signal [5], [6]. This quantization noise has a standard deviation of $\Delta/\sqrt{12}$, or it equals 0.11 dB for … dB.…”
Section: Methods (citation type: mentioning; confidence: 99%)
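As a quick numerical check of the model quoted above (our sketch, not from the cited paper): for a busy input, the error of a uniform quantizer with step $\Delta$ is close to uniform on $[-\Delta/2, \Delta/2]$, with standard deviation $\Delta/\sqrt{12} \approx 0.289\,\Delta$ and negligible correlation with the input.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1                                # quantization step (illustrative)
x = rng.uniform(-1.0, 1.0, 100_000)        # full-scale test input

err = delta * np.round(x / delta) - x      # mid-tread quantizer error

print(f"empirical std : {err.std():.5f}")
print(f"delta/sqrt(12): {delta / np.sqrt(12):.5f}")
print(f"corr(err, x)  : {np.corrcoef(err, x)[0, 1]:+.4f}")  # close to 0
```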
“…If the number of bits is sufficiently large, this correlation is weak. Another type of quantizer is the dithered quantizer [6]. The dithered quantizer is based on the idea of decorrelating signal and quantization error by randomized manipulations of the quantizer input.…”
Section: Brief Theory of Quantization (citation type: mentioning; confidence: 99%)
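A minimal sketch of the decorrelation idea (names and the test signal are our illustrative assumptions): with subtractive uniform dither spanning one quantization step, the error becomes independent of the input, even for a low-level signal where the undithered error is almost perfectly anti-correlated with it.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.5
n = 200_000
x = 0.4 * delta * np.sin(np.linspace(0.0, 400.0 * np.pi, n))  # low-level tone

def quantize(v):
    return delta * np.round(v / delta)

e_plain = quantize(x) - x                     # undithered error
d = rng.uniform(-delta / 2, delta / 2, n)     # uniform dither, one step wide
e_dith = (quantize(x + d) - d) - x            # subtractive-dither error

print(f"corr(error, x) undithered: {np.corrcoef(e_plain, x)[0, 1]:+.3f}")  # ~ -1
print(f"corr(error, x) dithered  : {np.corrcoef(e_dith, x)[0, 1]:+.3f}")   # ~ 0
```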
“…1, and compare the various capacities with and without side information of an additive noise channel. Since choosing a uniform distribution for $X$ achieves capacity for any discrete additive noise channel, we have for Cases II and III
$$C_{\mathrm{SI@REC}} = C_{\mathrm{SI@BOTH}} = \log|\mathcal{X}| - \sum_{s} p(s)\, H(Z_s) = \log|\mathcal{X}| - H(Z \mid S) \tag{19}$$
and for Case I
$$C_{\mathrm{NOSI}} = \log|\mathcal{X}| - H(Z). \tag{20}$$
We thus have the following chain of inequalities
$$C_{\mathrm{NOSI}} \le C_{\mathrm{SI@TR}} \le C_{\mathrm{SI@REC}} = C_{\mathrm{SI@BOTH}}, \tag{21}$$
where
$$C_{\mathrm{SI@BOTH}} - C_{\mathrm{SI@TR}} = I(S; \tilde{Z}), \qquad C_{\mathrm{SI@BOTH}} - C_{\mathrm{NOSI}} = I(S; Z). \tag{22}$$
We have equality in the second inequality in (21), i.e., $C_{\mathrm{SI@REC}} = C_{\mathrm{SI@TR}}$, iff the distributions of $Z_s$, $s \in \mathcal{S}$, differ by a shift only, in which case the optimum $\tilde{Z}$ is statistically independent of $S$. We have equality in the first inequality in (21), i.e., $C_{\mathrm{SI@TR}} = C_{\mathrm{NOSI}}$, iff $H(\tilde{Z}) = H(Z)$, i.e., iff the optimal shifts $\{t_{\min}(s),\, s \in \mathcal{S}\}$ are the set of zero shifts.…”
Section: Causal Side Information at the Encoder (citation type: mentioning; confidence: 99%)
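To make the quoted capacity comparison concrete, here is a small numerical sketch for a mod-$q$ additive noise channel whose noise pmf depends on a state $S$; the toy channel, all numbers, and the variable names are our illustrative assumptions, not the cited paper's. With side information at the transmitter only, the encoder may add a state-dependent shift $t(s)$, so the effective noise is the entropy-minimizing mixture of shifted pmfs.

```python
import itertools
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy channel: Y = X + Z_S (mod q), with state S as in the quoted cases.
q = 4
p_s = np.array([0.5, 0.5])                       # state pmf (illustrative)
p_z_given_s = np.array([[0.7, 0.1, 0.1, 0.1],    # pmf of Z_0
                        [0.1, 0.1, 0.7, 0.1]])   # pmf of Z_1 (a shift of Z_0)

# Case I, no side information: C = log|X| - H(Z), Z the noise mixture.
c_nosi = np.log2(q) - entropy(p_s @ p_z_given_s)

# Side information at receiver/both ends: C = log|X| - H(Z | S), eq. (19).
c_si_both = np.log2(q) - sum(ps * entropy(pz)
                             for ps, pz in zip(p_s, p_z_given_s))

# Causal side information at the transmitter: apply a shift t(s) per state;
# the effective noise is the mixture of shifted pmfs, so maximize over shifts.
c_si_tr = max(
    np.log2(q) - entropy(sum(ps * np.roll(pz, t)
                             for ps, pz, t in zip(p_s, p_z_given_s, ts)))
    for ts in itertools.product(range(q), repeat=len(p_s))
)

print(f"C_NOSI    = {c_nosi:.4f} bits")   # strictly smallest here
print(f"C_SI@TR   = {c_si_tr:.4f} bits")  # equals C_SI@BOTH: Z_1 is a shift of Z_0
print(f"C_SI@BOTH = {c_si_both:.4f} bits")
```

Because the two noise pmfs differ by a cyclic shift only, the run illustrates the quoted equality condition: the transmitter-only capacity coincides with the both-ends capacity, while the no-side-information capacity is strictly smaller.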
“…The assumption that quantization error is signal-independent, uniformly distributed white noise is not strictly valid [13]. This assumption fails when the amplitude of the signal is comparable to the quantization step-size.…”
Section: Quantized NMF Framework (citation type: mentioning; confidence: 99%)
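A one-line way to see the failure mode (our gloss, not from the cited paper): with a mid-tread quantizer of step $\Delta$, every input smaller than half a step lands in the zero bin, so the error is exactly the negated signal, i.e. deterministic and perfectly anti-correlated with the input rather than uniform white noise.

```latex
% Mid-tread uniform quantizer Q with step \Delta: small inputs hit the
% zero bin, so the error equals the negated input (not uniform, not white).
|x| < \frac{\Delta}{2}
  \;\Longrightarrow\; Q(x) = 0
  \;\Longrightarrow\; e = Q(x) - x = -x .
```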
“…Entropy-based distortion measures and bit allocations for compression are often based on the mean-squared error [12], [13], [14]. This motivates our selection of the Frobenius norm for QNMF.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
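The link between the two criteria is direct (a standard identity, not specific to the cited paper): the squared Frobenius norm of the approximation residual is the total squared entrywise error, so minimizing it is equivalent to minimizing the mean-squared error over the matrix entries.

```latex
% Squared Frobenius norm of the residual = summed squared entrywise error,
% i.e. mn times the MSE for an m-by-n matrix.
\|A - \hat{A}\|_F^2
  = \sum_{i=1}^{m} \sum_{j=1}^{n} \bigl(a_{ij} - \hat{a}_{ij}\bigr)^2
  = mn \,\mathrm{MSE}\bigl(A, \hat{A}\bigr), \qquad A \in \mathbb{R}^{m \times n}.
```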