This paper presents a novel framework for human-agent teaming grounded in the principles of Reinforcement Learning (RL). Recognizing the need for a unified language across disciplines, we use RL concepts as a common standard for understanding and evaluating diverse teaming strategies. Our framework extends beyond traditional RL constructs to integrate belief states, prior knowledge, social considerations, situational awareness, and mental models, with particular focus on the roles of ethics and trust in effective teaming. We also discuss how sensor data, perception models, and actuator modules can be incorporated, emphasizing the framework's adaptability to a broad range of tasks and environments. We believe this work constitutes a substantial contribution to the field of human-agent teaming and establishes a solid foundation for future research and application.