This paper introduces CubeSat-CDT, a new cross-domain dataset comprising 21 trajectories of a real CubeSat acquired in a laboratory setup and 65 trajectories generated with two rendering engines, Unity and Blender. The three data sources feature the same 1U CubeSat and share the same camera intrinsic parameters. In addition, we conduct experiments to demonstrate the characteristics of the dataset using a novel and efficient spacecraft trajectory estimation method that leverages the information provided by all three data domains. Given a video of a target spacecraft, the proposed end-to-end approach relies on a Temporal Convolutional Network that enforces inter-frame coherence of the estimated 6-Degree-of-Freedom spacecraft poses. The pipeline is decomposed into two stages: first, spatial features are extracted from each frame in parallel; second, these features are lifted to the space of camera poses while preserving temporal information. Our results highlight the importance of addressing the domain gap problem in order to obtain reliable solutions for close-range autonomous relative navigation between spacecraft. Since the nature of the data used during training directly impacts the performance of the final solution, the CubeSat-CDT dataset is released to advance research in this direction.
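To make the two-stage design concrete, the following is a minimal PyTorch sketch, not the authors' exact implementation: a ResNet-18 backbone (an illustrative choice; the abstract does not specify the feature extractor) processes all frames in parallel, and a stack of dilated 1-D convolutions stands in for the Temporal Convolutional Network, regressing a 6-DoF pose per frame as a translation plus unit quaternion. All layer widths and the pose parameterisation are assumptions for illustration.

```python
# Minimal sketch of the two-stage pipeline (assumed configuration,
# not the paper's exact architecture or hyperparameters).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class TrajectoryEstimator(nn.Module):
    """Stage 1: per-frame spatial features. Stage 2: temporal
    convolutions lifting the feature sequence to 6-DoF poses."""

    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        # Drop the classification head; keep the 512-d global feature.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        # Dilated 1-D convolutions over time enforce inter-frame coherence.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        # Regress translation (3) + rotation as a quaternion (4) per frame.
        self.head = nn.Conv1d(hidden, 7, kernel_size=1)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        # Stage 1: extract spatial features from all frames in parallel.
        feats = self.backbone(video.flatten(0, 1)).flatten(1)  # (b*t, 512)
        feats = feats.view(b, t, -1).transpose(1, 2)           # (b, 512, t)
        # Stage 2: temporal convolutions preserve frame ordering.
        poses = self.head(self.tcn(feats)).transpose(1, 2)     # (b, t, 7)
        # Normalise the quaternion part to unit length.
        trans, quat = poses[..., :3], poses[..., 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return torch.cat([trans, quat], dim=-1)


if __name__ == "__main__":
    model = TrajectoryEstimator()
    clip = torch.randn(2, 8, 3, 224, 224)  # two clips of 8 frames each
    print(model(clip).shape)               # torch.Size([2, 8, 7])
```

The key design point the sketch illustrates is the separation of concerns: the spatial stage is applied independently to each frame (so it can run in parallel), while all temporal reasoning is confined to the 1-D convolutional stage, whose receptive field across frames is what couples neighbouring pose estimates.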