2014
DOI: 10.1109/tmm.2014.2307553

Stationary Probability Model for Microscopic Parallelism in JPEG2000

Abstract: Parallel processing is key to augmenting the throughput of image codecs. Despite numerous efforts to parallelize wavelet-based image coding systems, most attempts fail at the parallelization of the bitplane coding engine, which is the most computationally intensive stage of the coding pipeline. The main reason for this failure is the causality with which current coding strategies are devised, which assumes that one coefficient is coded after another. This work analyzes the mechanisms employed in bitplane…

Cited by 16 publications (16 citation statements), published 2014–2023. References 29 publications.

Citation statements:

“…with the same wavelet filter-bank are statistically similar [35], [37], [38]. A more in-depth study on this stationary probability model can be found in [33].…”
Section: B. Context Formation and Probability Model (mentioning)
confidence: 87%
“…The main problem of coding pass parallelism is that in order to code a coefficient in the current pass, some information of its neighbors coded in previous passes may be needed. This is addressed by delaying the beginning of the execution of each coding pass some coefficients with respect to its immediately previous pass [4], [33]. Such an elaborate strategy is not suitable for SIMD computing since each coding pass carries out different operations, which generates divergence among threads.…”
Section: A. Scanning Order (mentioning)
confidence: 99%
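As a rough illustration of the delayed-start strategy this statement refers to, the sketch below interleaves the three JPEG2000 coding passes over a stripe of coefficients, letting each pass advance only once the previous pass has moved a fixed number of coefficients ahead (or has finished). The pass bodies, the DELAY value, and the coefficient count are illustrative placeholders, not parameters taken from [4] or [33].

#include <stdio.h>

#define NUM_COEFFS 64   /* illustrative stripe length */
#define DELAY 4         /* assumed lag, in coefficients, between consecutive passes */

/* Placeholder pass bodies; a real codec would emit symbols to the MQ coder here. */
static void code_spp(int i) { (void)i; }  /* significance propagation pass */
static void code_mrp(int i) { (void)i; }  /* magnitude refinement pass */
static void code_cp(int i)  { (void)i; }  /* cleanup pass */

int main(void) {
    int spp = 0, mrp = 0, cp = 0;  /* scan positions of the three passes */

    /* Interleave the passes: a pass processes coefficient i only once the
       previous pass has either finished or is at least DELAY coefficients
       ahead, so the neighbor state it depends on is already up to date. */
    while (cp < NUM_COEFFS) {
        if (spp < NUM_COEFFS)
            code_spp(spp++);
        if (mrp < NUM_COEFFS && (spp == NUM_COEFFS || spp - mrp >= DELAY))
            code_mrp(mrp++);
        if (cp < NUM_COEFFS && (mrp == NUM_COEFFS || mrp - cp >= DELAY))
            code_cp(cp++);
    }
    printf("coded %d coefficients with three interleaved passes\n", NUM_COEFFS);
    return 0;
}

On a SIMD device the three branches would execute different operations in different threads, which is the divergence problem the statement points out.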
“…The main idea is to use a fixed probability for each context and bitplane. As shown in [10], this model is based on the empirical evidence that the probabilities employed to code all symbols with a context are mostly regular in the same bitplane. The probability estimates are precomputed off-line and stored in a lookup table (LUT) that is known by the encoder and the decoder.…”
Section: B. Context Formation and Probability Model (mentioning)
confidence: 99%
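Below is a minimal sketch of what such a fixed-probability lookup table could look like, assuming 19 MQ-coder contexts and a per-bitplane table. The table contents here are invented placeholders; as the statement notes, real estimates would be computed off-line from training data and be known to both encoder and decoder.

#include <stdio.h>

#define NUM_BITPLANES 16
#define NUM_CONTEXTS  19   /* JPEG2000's MQ coder defines 19 contexts */

/* Fixed probability estimates indexed by (bitplane, context). The values
   below are placeholders; real estimates would be derived off-line from a
   training set and built into both encoder and decoder. */
static double prob_lut[NUM_BITPLANES][NUM_CONTEXTS];

static void init_placeholder_lut(void) {
    for (int b = 0; b < NUM_BITPLANES; b++)
        for (int c = 0; c < NUM_CONTEXTS; c++)
            prob_lut[b][c] = 0.55 + 0.40 * c / (NUM_CONTEXTS - 1); /* dummy values */
}

/* Probability used to code a symbol with context `ctx` in bitplane `bp`.
   Because the estimate is fixed (no adaptation), there is no sequential
   dependency on previously coded symbols. */
static double symbol_probability(int bp, int ctx) {
    return prob_lut[bp][ctx];
}

int main(void) {
    init_placeholder_lut();
    printf("p(bitplane=7, ctx=5) = %.3f\n", symbol_probability(7, 5));
    return 0;
}

The design point is that removing the adaptive state of the arithmetic coder is what allows coefficients to be coded independently of one another, which is the microscopic parallelism the cited paper targets.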