Approaches to keeping a dynamical system within state constraints typically rely on a model-based safety condition to limit the control signals. In the face of significant modeling uncertainty, the system can suffer substantial performance penalties because the safety condition becomes overly conservative. Machine learning can be employed to reduce the uncertainty about the system dynamics and thereby allow for higher performance. In this article, we propose the safe uncertainty-learning principle and argue that the learning must be properly structured to preserve safety guarantees. For instance, robust safety conditions are necessary, and they must be initialized with conservative uncertainty bounds prior to learning. Moreover, the uncertainty bounds should only be tightened if the collected data sufficiently capture the future system behavior. To support the principle, two example problems are solved with control barrier functions: a lane-change controller for an autonomous vehicle and an adaptive cruise controller. This work offers a way to evaluate whether machine learning preserves safety guarantees during the control of uncertain dynamical systems, and it highlights challenging aspects of learning for control.
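
To make the "robust safety condition" concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes control-affine dynamics $\dot{x} = f(x) + g(x)u + d$ with a bounded model error $\|d\| \le \bar{d}$, a safe set $\mathcal{C} = \{x : h(x) \ge 0\}$, and an extended class-$\mathcal{K}$ function $\alpha$ (all symbols here are illustrative assumptions). A standard robust control barrier function condition then reads:

\[
% Minimal sketch (assumed, not the paper's exact condition): enforce at
% every state a CBF inequality hardened against the worst-case error d,
% using \|\nabla h(x)\| \bar{d} as an upper bound on |\nabla h(x)^T d|:
\nabla h(x)^{\top}\bigl(f(x) + g(x)u\bigr)
\;-\; \|\nabla h(x)\|\,\bar{d}
\;\ge\; -\alpha\bigl(h(x)\bigr).
\]

Under this reading, learning serves to shrink $\bar{d}$: the principle above requires $\bar{d}$ to start conservative and to be tightened only when the collected data justify it, so that the inequality, and hence forward invariance of $\mathcal{C}$, is never violated in the interim.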