Recent developments in Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) have made them highly useful for various tasks. However, each has constraints that prevent it from completing intricate tasks alone in many scenarios. For example, a UGV is unable to reach high places, while a UAV is limited by its power supply and payload capacity. In this paper, we propose an Imitation Augmented Deep Reinforcement Learning (IADRL) model that enables a UGV and a UAV to form a complementary and cooperative coalition for completing tasks that neither can achieve alone. IADRL learns the underlying complementary behaviors of UGVs and UAVs from a demonstration dataset collected in simple scenarios with non-optimized strategies. Based on observations from the UGV and UAV, IADRL provides an optimized policy for the UGV-UAV coalition to work in a complementary way while minimizing cost. We evaluate the IADRL approach on a visual game-based simulation platform and conduct experiments showing that it effectively enables the coalition to accomplish tasks cooperatively and cost-effectively.

INDEX TERMS Unmanned aerial vehicle (UAV), unmanned ground vehicle (UGV), coalition, deep reinforcement learning (DRL), imitation learning.

JIAN ZHANG (Member, IEEE) received the B.Sc. and M.Sc. degrees in applied physics from Sichuan University, Chengdu, China, in 2001 and 2008, respectively, and the Ph.D. degree in electrical and computer engineering from Auburn University, Auburn, AL, USA, in 2016. He is currently an Assistant Research Professor with the RFID Laboratory, Auburn University. His main research interests include RFID technologies and applications, the Internet of Things, indoor localization, UAVs, and collaborative robotics. His work focuses on improving the efficiency of supply chain management for industry and business.