Reverse-Time Migration (RTM) is a two-way wave-equation-based method used to generate images of the Earth's subsurface. RTM has been used successfully in seismic imaging because it can delineate complex structural areas. However, RTM is a computationally expensive algorithm that requires computing both the source and the receiver wavefields for each shot. Fortunately, the numerical methods that model wave propagation with the wave equation are highly parallelizable, so they can leverage GPU capabilities. The main difficulty of a GPU-RTM implementation, however, is memory management: to exploit the computing power of the GPU, transfers to host RAM or, even more costly, to hard-disk storage must be avoided. We present an analysis of three strategies to implement RTM using only the memory available on a single GPU: (1) stored-wavefield checkpointing, (2) backpropagation of the source wavefield using stored boundaries, and (3) backpropagation of the source wavefield using the last two snapshots and random boundaries. We show that the large amount of memory required by the first two strategies restricts the size of the model that can be handled, whereas the last method (random boundary conditions) is proposed as a solution to the memory limitation of a single GPU.
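
To make strategy (3) concrete, the sketch below illustrates, in plain NumPy rather than CUDA, why a leapfrog finite-difference scheme with non-absorbing (random) boundaries allows the source wavefield to be reconstructed from only its last two snapshots. The grid sizes, Ricker wavelet, velocity values, and variable names are illustrative assumptions, not the implementation evaluated in this work; the receiver-side backpropagation and the cross-correlation imaging condition that complete RTM are omitted for brevity.

```python
# Minimal sketch of strategy (3): propagate the source wavefield forward
# through a model whose outer strip has randomized velocities (non-absorbing,
# so the scheme stays time-reversible), keep only the last two snapshots,
# then step backward and check that an earlier snapshot is recovered.
# All sizes and parameters are illustrative assumptions, not the paper's setup.
import numpy as np

nx, nz, nb = 100, 100, 20            # interior grid points and random-boundary width
dx, dt, nt = 10.0, 0.001, 600        # grid spacing [m], time step [s], number of steps
v = np.full((nz + 2 * nb, nx + 2 * nb), 2000.0)   # velocity model [m/s]

# Random boundary: scramble velocities in the outer strip instead of absorbing energy.
rng = np.random.default_rng(0)
strip = np.ones_like(v, dtype=bool)
strip[nb:-nb, nb:-nb] = False
v[strip] = rng.uniform(1500.0, 3500.0, size=int(strip.sum()))

courant2 = (v * dt / dx) ** 2        # (v dt / dx)^2, used with the dimensionless stencil
src_z, src_x = nb + nz // 2, nb + nx // 2
t = np.arange(nt) * dt
f0, t0 = 15.0, 1.0 / 15.0            # Ricker wavelet peak frequency [Hz] and delay [s]
wavelet = (1 - 2 * (np.pi * f0 * (t - t0)) ** 2) * np.exp(-((np.pi * f0 * (t - t0)) ** 2))

def stencil(u):
    """Five-point Laplacian times dx^2, with zero (reflecting) outer edges."""
    s = np.zeros_like(u)
    s[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                     - 4.0 * u[1:-1, 1:-1])
    return s

# Forward pass: u^{n+1} = 2 u^n - u^{n-1} + (v dt/dx)^2 stencil(u^n) + source.
# Only the two most recent snapshots are kept; u_check is saved purely to
# verify the reconstruction below.
it_check = nt // 2
u_prev = np.zeros_like(v)            # u^{n-1}
u_curr = np.zeros_like(v)            # u^{n}
for it in range(nt):
    u_next = 2.0 * u_curr - u_prev + courant2 * stencil(u_curr)
    u_next[src_z, src_x] += wavelet[it] * dt ** 2
    if it == it_check:
        u_check = u_curr.copy()
    u_prev, u_curr = u_curr, u_next

# Backward pass: the same recurrence solved for u^{n-1},
#   u^{n-1} = 2 u^n - u^{n+1} + (v dt/dx)^2 stencil(u^n) + source,
# reconstructs earlier snapshots from the two stored ones while the source
# wavelet (a tiny array) is re-injected in reverse order.
b_next, b_curr = u_curr, u_prev      # the only wavefield snapshots retained
for it in range(nt - 1, -1, -1):
    if it == it_check:
        err = np.max(np.abs(b_curr - u_check))
        print(f"reconstruction error at step {it}: {err:.3e}")   # should be round-off level
    b_prev = 2.0 * b_curr - b_next + courant2 * stencil(b_curr)
    b_prev[src_z, src_x] += wavelet[it] * dt ** 2
    b_next, b_curr = b_curr, b_prev
```

Because the random boundary scatters energy instead of absorbing it, the scheme remains time-reversible, which is what makes the two-snapshot reconstruction possible; in a full GPU-RTM this backward pass would run in lockstep with the receiver-wavefield backpropagation so that the image can be accumulated without ever storing the whole source-wavefield history.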