This paper proposes an online identifier-critic learning framework for event-triggered optimal control of completely unknown nonlinear systems. Unlike classical adaptive dynamic programming (ADP) methods based on actor-critic neural networks (NNs), a filter-regression-based approach is developed to reconstruct the unknown system dynamics, thereby removing the dependence on an accurate system model in the control design loop. Meanwhile, NN adaptive laws are designed for parameter estimation using only the measured system state and input data, which facilitates the identifier-critic NN design; the convergence of these adaptive laws is analyzed. Furthermore, to reduce the state sampling frequency, two kinds of aperiodic sampling schemes, namely static and dynamic event triggers, are embedded into the proposed optimal control design. Finally, simulation results are presented to demonstrate the effectiveness of the proposed event-triggered optimal control strategy.
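To make the distinction between the two aperiodic sampling schemes concrete, the following is a minimal sketch of generic static and dynamic event-trigger checks. The quadratic trigger form and the gains sigma, theta, and lam are illustrative assumptions, not the specific conditions derived in the paper.

```python
import numpy as np

def static_trigger(x, x_hat, sigma=0.5):
    """Static rule: request a new sample when the sampling error
    e = x_hat - x grows too large relative to the current state.
    (Illustrative quadratic condition; gains are assumptions.)"""
    e = x_hat - x
    return np.dot(e, e) >= sigma * np.dot(x, x)

class DynamicTrigger:
    """Dynamic rule: an internal variable eta accumulates slack from the
    static condition, which typically lengthens inter-event times."""
    def __init__(self, sigma=0.5, theta=1.0, lam=1.0, eta0=1.0):
        self.sigma, self.theta, self.lam, self.eta = sigma, theta, lam, eta0

    def step(self, x, x_hat, dt):
        e = x_hat - x
        slack = self.sigma * np.dot(x, x) - np.dot(e, e)
        fire = self.eta + self.theta * slack <= 0.0
        # Euler update of the auxiliary dynamics: eta_dot = -lam*eta + slack
        self.eta += dt * (-self.lam * self.eta + slack)
        return fire
```

In either scheme, the controller holds the most recently sampled state x_hat between events; the state is re-sampled (and the control and NN weights updated) only when the trigger fires, so sampling occurs aperiodically rather than at every integration step.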