In this research, we investigate control of a hypersonic vehicle (HV) following its reentry into the Earth's atmosphere, using deep reinforcement learning (DRL) in a continuous space. We incorporate the basic kinematic and force equations of motion for a vehicle in atmospheric flight to formulate a reentry trajectory that satisfies the boundary constraints and multiple mission-related process constraints. The aerodynamic model of the vehicle emulates the properties of the Common Aero Vehicle (CAV-H), while the Earth's atmosphere follows the US Standard Atmosphere 1976, with significant simplifications to the planetary model. In unpowered flight, we control the vehicle's trajectory by perturbing its angle of attack and bank angle to achieve the desired objective. The control problem is formulated within actor-critic frameworks that use neural networks (NNs) as function approximators to select and evaluate control actions in continuous state and action spaces. We first train the model with each of two methods: on-policy proximal policy optimization (PPO) and off-policy twin delayed deep deterministic policy gradient (TD3). From the trajectories generated, we select a nominal trajectory for each algorithm that satisfies our mission requirements according to the reward model.
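To make the actor-critic control interface concrete, the sketch below shows a minimal deterministic actor of the kind used in TD3-style methods: a small NN mapping the vehicle state to a bounded two-channel continuous action (angle of attack and bank angle) via a tanh squashing layer. The action bounds, state dimension, and network sizes here are illustrative assumptions, not values from this work.

```python
import numpy as np

# Hypothetical bounds for the two control channels (illustrative only):
# angle of attack in [0, 20] deg, bank angle in [-80, 80] deg.
ACTION_LOW = np.array([0.0, -80.0])
ACTION_HIGH = np.array([20.0, 80.0])


class TinyActor:
    """Minimal deterministic actor: one hidden layer, tanh-squashed
    output rescaled to the action bounds, as in a TD3-style policy."""

    def __init__(self, state_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 2))  # 2 actions: alpha, bank
        self.b2 = np.zeros(2)

    def act(self, state):
        h = np.tanh(state @ self.w1 + self.b1)
        raw = np.tanh(h @ self.w2 + self.b2)  # each component in (-1, 1)
        # Rescale per channel from (-1, 1) to [low, high].
        return ACTION_LOW + 0.5 * (raw + 1.0) * (ACTION_HIGH - ACTION_LOW)


# Example: an assumed 6-dimensional reentry state (e.g. altitude, velocity,
# flight-path angle, heading, longitude, latitude), standardized beforehand.
actor = TinyActor(state_dim=6)
alpha, bank = actor.act(np.zeros(6))  # zero state maps to the mid-range action
```

In PPO the actor would instead output the mean (and log-std) of a Gaussian policy, but the same bounded-rescaling pattern applies; the critic side, omitted here, evaluates these actions via a learned value or Q-function.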