In existing pyramid-based spatially scalable coding schemes, such as H.264/MPEG-4 SVC (scalable video coding), a video frame at a given high-resolution layer is predicted either from the same frame at the next lower resolution layer or from temporally neighboring frames within the same resolution layer. These schemes fail to exploit both kinds of correlation simultaneously and therefore cannot efficiently remove the redundancy among resolution layers. This paper extends the idea of the spatiotemporal subband transform and proposes a general in-scale motion compensation technique for pyramid-based spatially scalable video coding. The video frame at each high-resolution layer is partitioned in frequency into two parts. The prediction for the lowpass part is derived from the next lower resolution layer, whereas the prediction for the highpass part is obtained from neighboring frames within the same resolution layer, to further exploit temporal correlation. In this way, both kinds of correlation are exploited simultaneously and the cross-resolution-layer redundancy can be largely removed. Furthermore, this paper also proposes a macroblock-based adaptive in-scale technique for hybrid spatial and SNR scalability. Experimental results show that the proposed techniques significantly improve the spatial scalability performance of H.264/MPEG-4 SVC, especially when the bit-rate ratio of the lower-resolution bit stream to the higher-resolution bit stream is considerable.

Index Terms: H.264/MPEG-4 SVC, in-scale motion compensation, inter-layer prediction, scalable video coding, spatial scalability.
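To illustrate the in-scale prediction idea described above, the following is a minimal sketch, not the paper's actual filter bank or the SVC syntax: it assumes a simple separable box lowpass filter and nearest-neighbor 2x upsampling as stand-ins for the analysis and upsampling filters, and the function names (`lowpass`, `upsample2x`, `in_scale_prediction`) are hypothetical. The sketch only shows how a high-resolution prediction could combine a lowpass part taken from the upsampled lower-resolution reconstruction with a highpass part taken from a temporal motion-compensated prediction in the same resolution layer.

```python
import numpy as np

def lowpass(frame, k=5):
    """Separable box lowpass filter (illustrative stand-in for the subband analysis filter)."""
    kernel = np.ones(k) / k
    # Filter rows, then columns; mode="same" keeps the frame size unchanged.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, frame)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def upsample2x(frame):
    """Nearest-neighbor 2x upsampling (illustrative stand-in for the SVC upsampling filter)."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def in_scale_prediction(low_res_recon, temporal_mc_pred):
    """Form a high-resolution prediction from two sources:
      - lowpass part: upsampled reconstruction of the next lower resolution layer
      - highpass part: motion-compensated prediction from neighboring frames
        within the same (high) resolution layer
    """
    inter_layer = upsample2x(low_res_recon)          # inter-layer prediction source
    low_part = lowpass(inter_layer)                  # keep only its lowpass content
    high_part = temporal_mc_pred - lowpass(temporal_mc_pred)  # highpass of temporal prediction
    return low_part + high_part

# Toy usage with random frames standing in for decoded pictures.
rng = np.random.default_rng(0)
low_res_recon = rng.random((72, 88))          # reconstructed lower-resolution frame
temporal_mc_pred = rng.random((144, 176))     # temporal MC prediction at high resolution
pred = in_scale_prediction(low_res_recon, temporal_mc_pred)
print(pred.shape)  # (144, 176)
```

In this sketch, the residual to be coded at the high-resolution layer would be the original high-resolution frame minus `pred`, so that the lowpass redundancy with the lower layer and the highpass temporal redundancy are both removed before coding.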