Unmanned aerial vehicles (UAVs) are an emerging technology that can be effectively utilized to perform data collection tasks in Internet of Things (IoT) networks. However, both the UAVs and the sensors in these networks are energy-limited devices, which necessitates an energy-efficient data collection procedure to prolong the network lifetime. In this paper, we propose a multi-UAV-assisted network in which the UAVs fly to the ground sensors and control each sensor's transmit power during data collection. Our goal is to minimize the total energy consumed by the UAVs and the sensors to accomplish the data collection mission. We decompose this problem into three sub-problems, namely single-UAV navigation, sensor power control, and multi-UAV scheduling, and model each as a finite-horizon Markov decision process (MDP). We then solve each sub-problem with a deep reinforcement learning (DRL)-based framework. Specifically, we use the deep deterministic policy gradient (DDPG) method to generate the best trajectory for a UAV in an obstacle-constrained environment, given its starting position and target sensor. We also deploy DDPG to control each sensor's transmit power during data collection. To schedule the order in which each UAV visits the sensors, we propose a multi-agent deep Q-learning (DQL) approach that takes the total energy consumption of the UAVs on each path into account. Our simulations show that the UAVs find a safe and optimal path for each of their trips, that continuous power control of the sensors outperforms fixed-power approaches in terms of total energy consumption during data collection, and that, compared with two commonly used baselines, our scheduling framework achieves better, near-optimal results.