Despite its success in various research domains, Reinforcement Learning (RL) faces challenges in its application to air transport operations due to the rigorous certification standards of the aviation industry. The existing regulatory framework does not provide adequate acceptable means of compliance for RL applications, and consequently there is no legal framework for their safe deployment yet. To enable real-world utilisation of these promising methods, guidelines must be formulated for certifying RL models intended for air transport operations. Such guidelines must account for the unique characteristics of RL models and therefore deviate from the methodology of current guidelines, which were crafted before the emergence of ML applications. This paper proposes novel certification requirements for RL models based on their technical characteristics, safety-criticality, and autonomy. The proposed framework covers the choice of the RL algorithm and analyses the actions, agents, environment, and potential hazards and risks of the RL application. Additionally, this work outlines the evidence the certification applicant must present to demonstrate compliance with these requirements. While not a complete solution to the complex problem of certifying RL, the framework is intended as a starting point that can be extended in cooperation with regulatory entities.