Purpose: Recent studies have shown that self-attention modules can better solve vision understanding problems by capturing long-range dependencies. However, few works have designed a lightweight self-attention module to improve the quality of MRI reconstruction. Moreover, several widely used self-attention modules (e.g., the non-local block) incur high computational complexity and require a large amount of GPU memory when the input feature map is large. The purpose of this study is to design a lightweight yet effective spatial orthogonal attention module (SOAM) to capture long-range dependencies, and to develop a novel spatial orthogonal attention generative adversarial network, termed SOGAN, for more accurate MRI reconstruction. Methods: We first develop a lightweight SOAM, which generates two small attention maps to aggregate long-range contextual information in the vertical and horizontal directions, respectively. We then embed the proposed SOAMs into concatenated convolutional autoencoders to form the generator of the proposed SOGAN. Results: The experimental results demonstrate that the proposed SOAMs effectively improve the quality of the reconstructed MR images by capturing long-range dependencies. In addition, compared with state-of-the-art deep learning-based CS-MRI methods, the proposed SOGAN reconstructs MR images more accurately while using fewer model parameters. Conclusions: The proposed SOAM is a lightweight yet effective self-attention module for capturing long-range dependencies, and can thus substantially improve the quality of MRI reconstruction. With the help of SOAMs, the proposed SOGAN outperforms state-of-the-art deep learning-based CS-MRI methods.
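The two-direction factorization described in Methods (attending along the vertical axis and then the horizontal axis, so each attention map covers only one spatial dimension) can be illustrated with a minimal NumPy sketch. This is an assumed approximation of the idea, not the authors' implementation: it uses the features themselves as queries, keys, and values (no learned projections), and the function name is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_orthogonal_attention(x):
    """Sketch of orthogonal (axial) attention on a feature map x of shape (H, W, C).

    Vertical pass: each column attends over its H positions (W maps of size H x H).
    Horizontal pass: each row attends over its W positions (H maps of size W x W).
    Cost is O(H*W*(H+W)*C) instead of O((H*W)^2 * C) for full non-local attention.
    """
    h, w, c = x.shape
    # Vertical: reshape so columns are batched, then attend within each column.
    col = x.transpose(1, 0, 2)                                    # (W, H, C)
    attn_v = softmax(col @ col.transpose(0, 2, 1) / np.sqrt(c))   # (W, H, H)
    x = (attn_v @ col).transpose(1, 0, 2)                         # (H, W, C)
    # Horizontal: rows are already batched along the first axis.
    attn_h = softmax(x @ x.transpose(0, 2, 1) / np.sqrt(c))       # (H, W, W)
    return attn_h @ x                                             # (H, W, C)
```

Because the two passes are composed, every output position can receive information from the whole plane (its column in the first pass, then its row in the second), while each attention map stays small.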