A major challenge for the integration of unmanned aerial vehicles (UAVs) into current civil applications is the sense-and-avoid (SAA) capability and the consequent need for midair collision avoidance. Although unmanned aircraft systems (UAS) have been shown to be effective under a wide range of conditions, their safety, reliability, and compliance with aviation regulations remain to be proven. In autonomous collision avoidance, a UAS senses hazards with its onboard sensors and autonomously decides on avoidance manoeuvres no later than the minimum safe time before impact. Each individual UAS must therefore be able to recognize imminent threats and execute evasive manoeuvres immediately. Most current sense-and-avoid algorithms are composed of a separate obstacle detection and tracking algorithm and a decision-making algorithm for the avoidance manoeuvre. By applying artificial intelligence (AI), a reinforcement learning (RL) algorithm combines the sense and avoid functions within a single state and action space: an autonomous agent learns to perform complex tasks by maximizing reward signals while interacting with its environment. Because a policy cannot feasibly be tested in every context, it is difficult to ensure that it works as broadly as intended. In such cases, it is important to trade off performance against robustness while learning the policy.

This work develops an optimization method for a robust reinforcement learning policy for a nonlinear small unmanned aircraft system (sUAS) in AirSim using a model-free architecture. Using an online-trained reinforcement learning agent, the optimized robust reinforcement learning (RRL) policy is compared against conventional RL and RRL algorithms.
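One way to make this performance-robustness trade-off explicit is to optimize a weighted combination of nominal and worst-case return. The formulation below is a common sketch rather than necessarily the objective used in this work, and all symbols are introduced here for illustration:

\[
J_\lambda(\pi) = \lambda \,\mathbb{E}_{p_0}\!\left[ R(\pi) \right] + (1-\lambda)\,\min_{p \in \mathcal{P}} \mathbb{E}_{p}\!\left[ R(\pi) \right], \qquad 0 \le \lambda \le 1,
\]

where \(p_0\) denotes the nominal environment dynamics, \(\mathcal{P}\) an uncertainty set of perturbed dynamics, \(R(\pi)\) the return of policy \(\pi\), and \(\lambda\) the weight placed on nominal performance versus worst-case robustness.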
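As an illustration of how a model-free agent can fuse the sense and avoid functions through its state and action spaces, the following minimal sketch runs tabular Q-learning against the AirSim multirotor Python API. The distance-sensor name, state discretization, reward shaping, and action set are illustrative assumptions rather than the configuration used in this work, and a Distance sensor must be declared in the AirSim settings.json for getDistanceSensorData to return readings.

```python
import random

import numpy as np
import airsim

# Illustrative action set: hold course, climb, or sidestep left/right (vx, vy, vz in m/s).
ACTIONS = [(2, 0, 0), (2, 0, -1), (2, -2, 0), (2, 2, 0)]
N_BINS = 8  # coarse range bins used as the state space (assumed discretization)


def discretize(distance_m, max_range=40.0):
    """Map a raw range reading to a coarse state index."""
    return min(int(distance_m / max_range * N_BINS), N_BINS - 1)


def run_episode(client, q, eps=0.1, alpha=0.1, gamma=0.95, max_steps=200):
    """One epsilon-greedy Q-learning episode; ends on collision or step limit."""
    client.reset()
    client.enableApiControl(True)
    client.armDisarm(True)
    client.takeoffAsync().join()
    # "Distance" is an assumed sensor name configured in settings.json.
    s = discretize(client.getDistanceSensorData("Distance").distance)
    for _ in range(max_steps):
        a = random.randrange(len(ACTIONS)) if random.random() < eps else int(np.argmax(q[s]))
        vx, vy, vz = ACTIONS[a]
        client.moveByVelocityAsync(vx, vy, vz, duration=1.0).join()
        s2 = discretize(client.getDistanceSensorData("Distance").distance)
        collided = client.simGetCollisionInfo().has_collided
        # Illustrative reward: heavy penalty on impact, small bonus for keeping clearance.
        r = -100.0 if collided else 1.0 + 0.1 * s2
        target = r if collided else r + gamma * np.max(q[s2])
        q[s, a] += alpha * (target - q[s, a])  # tabular Q-learning update
        s = s2
        if collided:
            break


if __name__ == "__main__":
    client = airsim.MultirotorClient()
    client.confirmConnection()
    q_table = np.zeros((N_BINS, len(ACTIONS)))
    for _ in range(50):  # number of training episodes is arbitrary here
        run_episode(client, q_table)
```

In this sketch a single Q-table carries both functions: the discretized range reading is the sensed state, and the greedy action over that state is the avoidance manoeuvre, so no separate detection and decision-making stages are required.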