This article focuses on the event-triggered optimized output feedback control problem for nonlinear strict-feedback systems. First, a fuzzy state observer is designed to estimate the unmeasurable states. Then, fuzzy-based reinforcement learning is performed under a critic-actor architecture to realize the optimized control. In addition, a novel event-triggered mechanism is developed for the system states to save communication resources. By means of Lyapunov stability theory, it is proved that all signals of the closed-loop system are bounded and that Zeno behavior is avoided. Finally, an inverted pendulum example is provided to confirm the effectiveness of the proposed algorithm.
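To make the overall structure concrete, the following is a minimal simulation sketch, not the paper's algorithm: it pairs a Luenberger-type observer with a relative-threshold event trigger on the estimated state of a linearized inverted pendulum. The fuzzy observer, the critic-actor reinforcement learning, and the article's specific triggering condition are replaced here by fixed gains and a generic trigger rule; all gains, thresholds, and model parameters below are illustrative assumptions.

```python
# Hedged sketch: event-triggered output feedback control of a linearized
# inverted pendulum. Fixed gains stand in for the paper's fuzzy observer
# and critic-actor learning; every numerical value is an assumption.
import numpy as np

# Linearized inverted pendulum: x = [angle, angular rate]
g, l, dt = 9.81, 1.0, 0.001
A = np.array([[0.0, 1.0], [g / l, 0.0]])
B = np.array([[0.0], [1.0 / (l * l)]])
C = np.array([[1.0, 0.0]])              # only the angle is measured

K = np.array([[40.0, 12.0]])            # stabilizing feedback gain (assumed)
L = np.array([[20.0], [120.0]])         # observer gain (assumed)
sigma, eps = 0.05, 1e-3                 # relative trigger threshold (assumed)

x = np.array([[0.3], [0.0]])            # true state, initial angle 0.3 rad
xhat = np.zeros((2, 1))                 # observer estimate
x_last = xhat.copy()                    # estimate held since the last trigger
u, updates = 0.0, 0

for k in range(int(5.0 / dt)):
    # Event trigger: refresh the control input only when the deviation since
    # the last transmission exceeds a state-dependent threshold.
    e = np.linalg.norm(xhat - x_last)
    if e > sigma * np.linalg.norm(xhat) + eps:
        x_last = xhat.copy()
        u = (-K @ x_last).item()
        updates += 1

    y = (C @ x).item()                          # measured output
    xhat = xhat + dt * (A @ xhat + B * u + L * (y - (C @ xhat).item()))
    x = x + dt * (A @ x + B * u)                # plant, forward-Euler step

print(f"control updates: {updates} of {int(5.0 / dt)} samples, "
      f"final angle: {x[0, 0]:.4f} rad")
```

The point of the sketch is only the communication-saving mechanism: the controller is updated at a small fraction of the sampling instants while the observer runs continuously on the measured output, which mirrors the role the event-triggered mechanism plays in the article.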