2014
DOI: 10.1109/tip.2014.2365698

Lossless Predictive Coding for Images With Bayesian Treatment

Abstract: Adaptive predictors have long been used for lossless predictive coding of images. Most existing lossless predictive coding techniques focus mainly on the suitability of the prediction model for the training set, under the underlying assumption of local consistency, which may not hold well on object boundaries and can cause large prediction errors. In this paper, we propose a novel approach based on the assumption that local consistency and patch redundancy exist simultaneously in natural images. We derive a family of linear mod…
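As a rough, hypothetical illustration of the conventional adaptive linear prediction the abstract contrasts against (coefficients fit by least squares on a causal training window under the local-consistency assumption), a sketch might look like the following. The 4-pixel causal support, window size, and all names are illustrative assumptions, not the paper's Bayesian model.

```python
# Hypothetical sketch of conventional least-squares adaptive prediction,
# the kind of baseline the paper's Bayesian treatment refines.
import numpy as np

def causal_neighbors(img, r, c):
    # West, north, north-west, and north-east neighbors: a common causal support.
    return np.array([img[r, c - 1], img[r - 1, c],
                     img[r - 1, c - 1], img[r - 1, c + 1]], dtype=np.float64)

def predict_pixel(img, r, c, train_radius=3):
    """Least-squares linear prediction of img[r, c] from previously coded pixels."""
    X, y = [], []
    for i in range(max(1, r - train_radius), r + 1):
        for j in range(max(1, c - train_radius),
                       min(img.shape[1] - 1, c + train_radius + 1)):
            if (i, j) >= (r, c):          # keep the training set strictly causal
                break
            X.append(causal_neighbors(img, i, j))
            y.append(img[i, j])
    if len(y) < 4:                        # too little context: fall back to the west pixel
        return float(img[r, c - 1])
    coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return float(coef @ causal_neighbors(img, r, c))
```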

Cited by 9 publications (7 citation statements)
References 29 publications
“…This can be regarded as b being a parameter of the two-sided geometric distribution and b̂(x^{t−1}) being its tuning method. In other studies [12][13][14][15][16][17], the authors proposed coding procedures f(x^{t−1}; a, b, c_a) in which the coefficients c_a of each linear predictor are tuned by a certain method ĉ_a(x^{t−1}), e.g., the least squares method or the weighted least squares method. In [18,19], the authors proposed coding procedures f(x^{t−1}; a, b, c_a, w) in which multiple predictors are combined according to another tuning parameter w that represents the weight of each predictor.…”
Section: Lossless Image Compression As an Image Processing
Mentioning, confidence: 99%
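As an illustrative sketch (assumed here, not the exact scheme of [18,19]) of the predictor-weighting idea mentioned above, the weight of each predictor could be set from its squared error on already-decoded causal pixels, and the predictions for the current pixel then blended:

```python
# Assumed illustration of combining multiple predictors with a tuning parameter w.
# The exponential weighting rule and the example predictor outputs are placeholders.
import numpy as np

def blend_predictors(preds_on_context, context_values, preds_at_target, temperature=10.0):
    """
    preds_on_context : (K, N) array, K predictors evaluated on N causal pixels
    context_values   : (N,) array, true values of those causal pixels
    preds_at_target  : (K,) array, each predictor's estimate of the current pixel
    """
    mse = np.mean((preds_on_context - context_values) ** 2, axis=1)
    w = np.exp(-mse / temperature)        # better predictors get larger weights
    w /= w.sum()
    return float(w @ preds_at_target)

# Example: three simple predictors evaluated on five causal pixels.
ctx_preds = np.array([[100, 101,  99, 102, 100],
                      [ 98, 100, 100, 101,  99],
                      [110, 112, 108, 111, 109]], dtype=float)
ctx_true = np.array([100, 101, 100, 102, 100], dtype=float)
target_preds = np.array([101.0, 100.0, 110.0])
print(blend_predictors(ctx_preds, ctx_true, target_preds))
```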
“…However, there are some coding procedures f(x; a) [11][12][13][14][16][17][18][19][20][23] whose tuning parameter a can be regarded as a statistical parameter of an implicitly assumed statistical generative model p(x|a) by changing the viewpoint. (In some of these studies, a stochastic generative model is claimed as an assumption, but the distinction between the stochastic generative model and the code length assignment vector is ambiguous, and the discussion of the difference between the expected code length and the entropy of the stochastic generative model is insufficient.)…”
Section: Lossless Image Compression On An Explicitly Redefined Stochastic Generative Model
Mentioning, confidence: 99%
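As background for the distinction the statement draws between a code length assignment and a stochastic generative model, the following standard information-theoretic identities (not results of the cited papers) show how a generative model p(x|a) induces an ideal code length, and how the expected code length relates to the entropy:

```latex
% Ideal code length induced by a generative model p(x|a):
\ell(x \mid a) = -\log_2 p(x \mid a)
% Its expectation under the model equals the model's entropy:
\mathbb{E}_{p(x \mid a)}\bigl[\ell(x \mid a)\bigr] = H\bigl(p(\cdot \mid a)\bigr)
% If the coder instead assigns lengths from some q(x), the expected length
% exceeds the entropy by the Kullback--Leibler divergence:
\mathbb{E}_{p(x \mid a)}\bigl[-\log_2 q(x)\bigr]
  = H\bigl(p(\cdot \mid a)\bigr) + D_{\mathrm{KL}}\bigl(p(\cdot \mid a) \,\|\, q\bigr)
```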
“…However, there are some coding procedures f(x; a) [10]-[13], [15]-[19], [22] whose tuning parameter a can be regarded as a statistical parameter of an implicitly assumed statistical generative model p(x|a) by changing the viewpoint. Further, its parameter tuning method ã(x) could be regarded as a heuristic approximation of the information-theoretically optimal estimate â(x), i.e., ã(x) ≈ â(x).…”
Section: Lossless Image Compression On An Explicitly Redefined Stochastic Generative Model
Mentioning, confidence: 99%
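One way to read the approximation ã(x) ≈ â(x) above (a formalization assumed here, not quoted from the cited works) is that the information-theoretically optimal estimate minimizes the code length actually assigned to the observed data, while the heuristic tuning rule only approximates that minimizer:

```latex
% Assumed formalization: the optimal parameter estimate minimizes the code
% length assigned to the observed data x; the heuristic tuning method
% \tilde{a}(x) approximates it.
\hat{a}(x) = \operatorname*{arg\,min}_{a}\ \bigl[-\log_2 p(x \mid a)\bigr],
\qquad \tilde{a}(x) \approx \hat{a}(x)
```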
“…The previous studies based on this approach are [23] and [24]. In [23], the authors proposed a two-dimensional autoregressive model and the optimal coding algorithm for it by interpreting the basic procedure [10]-[12], [15], [22] of predictive coding as a stochastic generative model. In [24], they proposed a two-dimensional autoregressive hidden Markov model by interpreting the predictor-weighting procedure around a diagonal edge [17] as a stochastic generative model.…”
Section: Lossless Image Compression On An Explicitly Redefined Stochastic Generative Model
Mentioning, confidence: 99%
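The statement above refers to a two-dimensional autoregressive generative model. A minimal sketch of that kind of model, in which each pixel is a linear combination of causal neighbors plus Gaussian noise, is given below; the support, coefficients, and noise level are placeholder assumptions, not the models of [23] or [24].

```python
# Sketch of a generic 2D autoregressive (2D-AR) image model: each pixel is a
# weighted sum of causal neighbors plus Gaussian noise. All values are placeholders.
import numpy as np

def sample_2d_ar(height, width, coefs=(0.45, 0.45, -0.20, 0.30), sigma=4.0, seed=0):
    rng = np.random.default_rng(seed)
    a_w, a_n, a_nw, a_ne = coefs            # west, north, north-west, north-east
    img = np.full((height, width), 128.0)   # flat boundary values
    for r in range(1, height):
        for c in range(1, width - 1):
            mean = (a_w * img[r, c - 1] + a_n * img[r - 1, c]
                    + a_nw * img[r - 1, c - 1] + a_ne * img[r - 1, c + 1])
            img[r, c] = mean + rng.normal(0.0, sigma)
    return img

sample = sample_2d_ar(64, 64)
print(sample.shape, round(sample.mean(), 1))
```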
“…A commonly used prior is the notion of smoothness, with the assumption that image signals tend to be piecewise smooth. It has achieved great success for numerous inverse problems in the image processing field, such as compression [5], [6], image quality assessment [7], [8], and restoration [9], [10]. As a typical inverse problem, BDE is no exception.…”
Section: Introduction
Mentioning, confidence: 99%