Processing of nonstationary one-dimensional and two-dimensional (2D) signals is usually performed using numerically demanding time-frequency and space/spatial-frequency (S/SF) tools, respectively. Owing to their high computational complexity, these solutions require significant calculation time and are therefore generally unsuitable for real-time analysis, which severely restricts their practical application. Hardware implementations, when possible, can overcome these problems. Moreover, the numerical complexity increases greatly in the 2D case, so the demand for hardware implementations of systems for processing such signals, including their filtering, is even more pronounced. However, in this case the chip dimensions, power consumption, and cost increase significantly, while the processing speed is seriously reduced. Therefore, given the technological limitations of hardware realizations, such systems usually cannot be implemented. To overcome these problems, a register transfer level (RTL) design methodology-based and signal-adaptive S/SF filter, suitable for real-time and on-a-chip implementation, has been developed in [1]. Alternatively, to significantly reduce the time requirements of space/spatial-frequency-based systems, implementations based on graphics processing units (GPUs) can be considered. In this paper, the RTL design methodology-based solution from [1] is compared with the corresponding GPU-based solutions.
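To illustrate why S/SF computation maps naturally onto GPUs, the sketch below shows a minimal CUDA kernel that evaluates, independently for every pixel, the local 2D autocorrelation that underlies a pseudo Wigner-type S/SF distribution. This is only an assumed, simplified illustration of the parallelization principle, not the implementation compared in this paper; the kernel name, the lag half-widths MH and NH, and the image size are hypothetical choices, and the subsequent 2D FFT over the lag variables (e.g., with cuFFT) is only indicated in the comments.

```cuda
// Minimal sketch (assumed formulation): one GPU thread per pixel computes a
// (2*MH+1) x (2*NH+1) local autocorrelation patch; a batched 2D FFT over the
// lag dimensions would then yield the S/SF distribution at that pixel.
#include <cuda_runtime.h>
#include <cstdio>

#define MH 4   // lag half-width along x (illustrative window size)
#define NH 4   // lag half-width along y (illustrative window size)

__global__ void localAutocorr(const float *img, float *r,
                              int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // pixel column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // pixel row
    if (x >= width || y >= height) return;

    const int lagsX = 2 * MH + 1, lagsY = 2 * NH + 1;
    // Autocorrelation patch belonging to this pixel.
    float *patch = r + (size_t)(y * width + x) * lagsX * lagsY;

    for (int n = -NH; n <= NH; ++n) {
        for (int m = -MH; m <= MH; ++m) {
            int xp = x + m, yp = y + n;   // leading sample
            int xm = x - m, ym = y - n;   // lagging sample
            float v = 0.0f;               // zero padding outside the image
            if (xp >= 0 && xp < width && yp >= 0 && yp < height &&
                xm >= 0 && xm < width && ym >= 0 && ym < height)
                v = img[yp * width + xp] * img[ym * width + xm];
            patch[(n + NH) * lagsX + (m + MH)] = v;
        }
    }
}

int main()
{
    const int width = 64, height = 64;               // illustrative image size
    const int lags = (2 * MH + 1) * (2 * NH + 1);

    float *dImg, *dR;
    cudaMalloc((void **)&dImg, width * height * sizeof(float));
    cudaMalloc((void **)&dR, (size_t)width * height * lags * sizeof(float));
    cudaMemset(dImg, 0, width * height * sizeof(float));  // dummy input image

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    localAutocorr<<<grid, block>>>(dImg, dR, width, height);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    // Next step (not shown): batched 2D FFT over the lag axes with cuFFT.

    cudaFree(dImg);
    cudaFree(dR);
    return 0;
}
```

Because every pixel's patch is computed without any inter-thread dependence, the work scales directly with the number of GPU cores, which is the property exploited when GPU-based S/SF implementations are compared against the RTL solution from [1].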