This paper investigates energy-efficient learning control for unknown repetitive nonlinear discrete-time systems. Traditional event-triggered model-free iterative learning control (ILC) relies on data-based approximation models to construct the controller optimization criterion, making it susceptible to model identification errors and the curse of dimensionality. To mitigate this limitation, we propose a novel direct-type high-order ILC algorithm with online learning capability. The controller output is derived by applying iterative dynamic linearization directly to an ideal virtual nonlinear learning controller, with the learning gains calibrated automatically in real time by a radial basis function neural network (RBFNN). The strategy further integrates an adaptive event-triggered protocol with a relative threshold that is updated dynamically from the trained neural weights and the tracking errors. Because the controller is constructed directly from measured data rather than from an identified approximation model, the scheme sidesteps the limitations of existing model-based strategies while reducing unnecessary control updates. Theoretical proofs establish the convergence of both the learning gains and the tracking errors, and the results are applied to frequency regulation under active-power impact loads on an experimental steel-industry microgrid platform, validating the effectiveness and applicability of the proposed scheme.
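The interplay of the three ingredients above can be illustrated with a minimal, self-contained sketch. Everything here is an illustrative assumption rather than the paper's actual design: the plant, the reference, the RBF centers and widths, the heuristic weight-calibration rule, and the relative trigger threshold are all stand-ins chosen only to show how an RBFNN-produced learning gain and a relative-threshold event trigger plug into a high-order ILC input update across repeated trials.

```python
import math

T = 20                      # trial length (time steps per repetition)
TRIALS = 30                 # number of ILC iterations
CENTERS = [-1.0, 0.0, 1.0]  # hypothetical RBF centers over the error signal
WIDTH = 1.0                 # shared RBF width (assumption)
ETA = 0.05                  # weight-calibration step size (assumption)
SIGMA = 0.2                 # relative event-trigger threshold (assumption)

def plant(y, u):
    """Hypothetical unknown repetitive nonlinear plant (stand-in only)."""
    return 0.5 * y + 0.2 * math.sin(y) + u

def rbf_gain(e, w):
    """Learning gain produced by the RBF network for error value e."""
    return sum(wj * math.exp(-((e - c) ** 2) / (2 * WIDTH ** 2))
               for wj, c in zip(w, CENTERS))

y_ref = [math.sin(0.3 * t) for t in range(T + 1)]   # reference trajectory
u = [0.0] * T                                       # control input signal
w = [0.3] * len(CENTERS)                            # initial RBF weights
triggers, errors = 0, []

for k in range(TRIALS):
    # Run one trial of the repetitive system and record the tracking error.
    y = [0.0] * (T + 1)
    for t in range(T):
        y[t + 1] = plant(y[t], u[t])
    e = [y_ref[t] - y[t] for t in range(T + 1)]
    errors.append(max(abs(v) for v in e))

    # Event-triggered ILC update: refresh u(t) only where the error exceeds
    # a relative threshold tied to this trial's peak error.
    peak = max(abs(v) for v in e[1:]) or 1.0
    for t in range(T):
        if abs(e[t + 1]) > SIGMA * peak:
            triggers += 1
            L = rbf_gain(e[t + 1], w)       # gain supplied by the RBFNN
            u[t] += L * e[t + 1]            # model-free ILC input update
            # Heuristic online calibration of the RBF weights (a stand-in
            # for the paper's adaptation law), capped to keep the update
            # contractive for this illustrative plant.
            for j, c in enumerate(CENTERS):
                phi = math.exp(-((e[t + 1] - c) ** 2) / (2 * WIDTH ** 2))
                w[j] = min(0.6, w[j] + ETA * abs(e[t + 1]) * phi)
```

Running the sketch, the peak tracking error shrinks over the trials while the trigger condition skips the time steps whose errors are already small relative to the trial's peak, so fewer than `TRIALS * T` control updates are transmitted; this skipping is what yields the energy and communication savings the abstract refers to.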