In response to the possibility that future air traffic environments will exceed human operator capabilities, automated decision-making tools are being integrated into the air traffic management system. However, the current system lacks a validated standard for measuring a critical element of the human-automation partnership: trust. Specifically needed is a valid scale appropriate for, and tested on, air traffic controllers. To address this gap, a two-phase modification of the Human-Automation Trust Scale of Jian, Bisantz, and Drury (2000) was deployed at the Airspace Operations Laboratory at NASA Ames during 2013. Applied to two different human-in-the-loop experiments, the resulting scale supports understanding of the underlying trust attitudes of air traffic controllers while maintaining a high inter-item reliability score. Using this assessment method when testing new air traffic management tools can help identify potential pitfalls in tool use and implementation.

INTRODUCTION

The next generation of air traffic control in the United States (NextGen) is evolving into an integrated human-automation environment, with automation-generated information becoming a critical contributor to decision-making (Joint Planning and Development Office, 2012). As this integration continues, it becomes increasingly necessary to appropriately assess the human-automation relationship as it pertains to decision-making and safety-critical tasks. Underlying trust has been identified as a key (though not sole) contributor to both intent to use and actual usage of an automated system (Lee & Moray, 1994). Developing a reliable method for measuring underlying trust is therefore necessary for assessing human-automation integration.
In the current air traffic control domain, where most systems are very safe and the goal is near-perfect performance (i.e., delivering aircraft safely and on time), a controller's general underlying trust attitude (Lee & See, 2004) toward an automated system is of as much concern as his or her ability to detect environmentally induced inaccuracies or unreliabilities (such as an incorrect weather forecast) that require a change in automated tool use (Kirlik, 1993). Measuring the actual general underlying trust of controllers in an automated system therefore requires a method that is not highly sensitive to direct experimental manipulation of environmental factors affecting the current accuracy of the automation, but instead responds to their underlying attitude toward, knowledge of, and trust in the system. To this end, in 2013, researchers in the Airspace Operations Laboratory (AOL) at NASA Ames employed modified versions of the Human-Automation Trust Scale (HAT; Jian, Bisantz, & Drury, 2000) during two human-in-the-loop simulations to assess its efficacy in an air traffic management (ATM) environment. HAT was selected specifically for its empirically based assessment method and its track record of use in different automated domains.
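The inter-item reliability referred to above is conventionally quantified with Cronbach's alpha, which compares the sum of the individual item variances to the variance of respondents' total scores. A minimal sketch of that computation follows; the responses shown are hypothetical 7-point ratings invented for illustration, not data from the studies described here.

```python
# Cronbach's alpha: a standard measure of inter-item reliability for
# survey scales such as the Human-Automation Trust (HAT) scale.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)

def cronbach_alpha(ratings):
    """ratings: list of respondents, each a list of item scores."""
    k = len(ratings[0])  # number of scale items

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [sample_var([r[i] for r in ratings]) for i in range(k)]
    total_var = sample_var([sum(r) for r in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses from five controllers on a four-item trust scale
responses = [
    [6, 5, 6, 6],
    [4, 4, 5, 4],
    [7, 6, 6, 7],
    [3, 3, 4, 3],
    [5, 5, 5, 6],
]
print(round(cronbach_alpha(responses), 2))  # -> 0.96
```

Values above roughly 0.7-0.8 are typically read as acceptable reliability; highly consistent ratings like those above yield an alpha near 1.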