Spacecraft automated rendezvous, proximity operations, and docking (ARPOD) play a significant role in many space missions, including on-orbit servicing and active debris removal. Precise modeling and prediction of spacecraft dynamics can be challenging due to uncertainties and perturbation forces in the spacecraft operating environment and due to the multilayered structure of its nominal control system. Despite these complications, spacecraft maneuvers must satisfy the required constraints (thrust limits, line-of-sight cone constraints, constraints on the relative velocity of approach, etc.) to ensure safety and achieve ARPOD objectives. This paper considers an application of a learning-based reference governor (LRG) to spacecraft ARPOD operations to enforce constraints without relying on a dynamic model of the spacecraft during the mission. Like the conventional reference governor (RG), the LRG is an add-on supervisor to a closed-loop control system, serving as a prefilter on the command generated by the ARPOD planner: it modifies the reference command, when necessary, to a constraint-admissible value so that the specified constraints are enforced. The LRG is distinguished, however, by its ability to rely on learning instead of an explicit model of the system, and it guarantees constraint satisfaction both during and after learning. In this paper, the LRG is applied to the control of the combined translational and rotational motion of a chaser spacecraft, and three case studies with different sets of safety constraints and thruster assumptions demonstrate the benefits of the LRG in ARPOD missions.
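To make the prefilter idea concrete, the following is a minimal sketch in Python (not the paper's algorithm) of a scalar-parameter reference governor update: at each step the governed command v is moved toward the planner's reference r by the largest fraction kappa in [0, 1] for which an admissibility test still passes. The names reference_governor_step and is_admissible are illustrative assumptions; in an LRG, the admissibility test would be learned rather than evaluated from an explicit model, and the bisection search assumes admissibility is monotone in kappa (as for a convex constraint-admissible set).

```python
def reference_governor_step(v_prev, r, is_admissible, iters=20):
    """One reference governor update: v = v_prev + kappa * (r - v_prev).

    v_prev        -- currently applied (governed) reference command,
                     assumed admissible (kappa = 0 is always safe)
    r             -- desired reference from the ARPOD planner
    is_admissible -- callable; True if holding the candidate reference
                     constant keeps the closed-loop response within all
                     constraints (in an LRG, this test is learned)
    """
    if is_admissible(r):                 # full step is safe: pass command through
        return r
    lo, hi = 0.0, 1.0                    # bracket the largest admissible kappa
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if is_admissible(v_prev + mid * (r - v_prev)):
            lo = mid                     # candidate safe: push kappa up
        else:
            hi = mid                     # candidate unsafe: pull kappa down
    return v_prev + lo * (r - v_prev)    # largest verified-safe step


# Toy usage: a norm bound on the command stands in for a learned test.
is_admissible = lambda v: abs(v) <= 1.0
v = 0.0
for r in (0.4, 2.0, 0.8):
    v = reference_governor_step(v, r, is_admissible)
    print(v)                             # 0.4, then ~1.0 (clipped), then 0.8
```

In the spacecraft setting, v and r would be vector-valued position/attitude setpoints, and the admissibility test would encode the constraints listed above (thrust limits, line-of-sight cones, and approach-velocity bounds).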