Highlights

• We propose a memory-saving method for the lifting-based discrete wavelet transform on a GPU.
• Our method reduces memory usage by unifying the input buffer and the output buffer.
• Our compact data representation interprets a sequence of memory addresses as a circular permutation.
• Experimental results on four GPU architectures are presented.
• Our unified method can transform a problem twice as large, with a maximum speedup of 3.9.
Abstract

In this study, to improve the speed of the lifting-based discrete wavelet transform (DWT) for large-scale data, we propose a parallel method that achieves low memory usage and highly efficient memory access on a graphics processing unit (GPU). The proposed method reduces memory usage by unifying the input buffer and the output buffer, at the cost of only a working memory region that is smaller than the data size n. The method partitions the input data into small chunks, which are then rearranged into groups so that different groups of chunks can be processed in parallel. This data rearrangement scheme classifies chunks in terms of data dependency, but it also facilitates transformation via simultaneous access to contiguous memory regions, which the GPU can handle efficiently. In addition, the rearrangement is interpreted as a product of circular permutations, so that a sequence of seeds, which is an order of magnitude shorter than the input data, allows the GPU threads to compute the complicated memory indexes needed for parallel rearrangement. Because the DWT is usually part of a processing pipeline in an application, we believe that the proposed method is useful for conserving memory for use by other pipeline stages.
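The cycle-seed idea in the abstract can be illustrated with a small sketch. This is not the authors' GPU kernel: the permutation below is a hypothetical stand-in (an even/odd split of the kind used by a lifting step), and the functions `perm`, `find_seeds`, and `apply_in_place` are illustrative names. The point is that storing one seed index per cycle is enough to apply the whole permutation in place, without a second buffer of size n.

```python
# Illustrative sketch (not the paper's actual permutation): applying a
# permutation in place by walking each cycle from a stored "seed" index.

def perm(i, n):
    """Target index of element i under an even/odd split of length n
    (evens packed first, odds second) - a hypothetical example permutation."""
    half = (n + 1) // 2
    return i // 2 if i % 2 == 0 else half + i // 2

def find_seeds(n):
    """Return one representative (seed) index per cycle of the permutation.
    The seed list is typically much shorter than a full index map of size n."""
    seen = [False] * n
    seeds = []
    for i in range(n):
        if not seen[i]:
            seeds.append(i)
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm(j, n)
    return seeds

def apply_in_place(data, seeds):
    """Apply the permutation in place by following each cycle from its seed,
    so no separate output buffer is needed."""
    n = len(data)
    for s in seeds:
        carry = data[s]          # element that must move to perm(s)
        j = perm(s, n)
        while j != s:
            data[j], carry = carry, data[j]  # drop carry, pick up displaced
            j = perm(j, n)
        data[s] = carry          # close the cycle
    return data

# Example: rearranging 8 elements with only the per-cycle seeds stored.
# apply_in_place(list(range(8)), find_seeds(8)) -> [0, 2, 4, 6, 1, 3, 5, 7]
```

In a GPU setting, each thread (or thread group) could be assigned one seed and process its cycle independently, which is one plausible reading of how a short seed sequence lets threads recover the complicated memory indexes.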