Patients with Type 1 diabetes must closely monitor their blood glucose levels and inject insulin to control them. Automated glucose control methods that remove the need for human intervention have been proposed, and reinforcement learning has recently been used as an effective control method in simulation environments. However, its real-world application would require trial-and-error interaction with patients. As an alternative, offline reinforcement learning does not require interaction with humans, and initial studies suggest that promising results can be obtained from offline datasets, much as with classical machine learning algorithms. However, its application to glucose control has not yet been evaluated. In this study, we evaluated two offline reinforcement learning algorithms for blood glucose control and discussed their potential and shortcomings. We also evaluated how training and performance are influenced by the method used to generate the training datasets, by the type of trajectories used (single-method or mixed), by the quality of the trajectories, and by the size of the datasets. Our results show that one of the offline reinforcement learning algorithms evaluated, Trajectory Transformer, is able to perform at the same level as commonly used baselines such as PID and Proximal Policy Optimization.

INDEX TERMS T1D blood glucose control, Offline reinforcement learning, Transformer, Artificial pancreas, Machine learning

As a solution, several methods for automated glucose