This article proposes three novel time-varying policy iteration algorithms for the finite-horizon optimal control problem of continuous-time affine nonlinear systems. We first propose a model-based time-varying policy iteration algorithm, which seeks time-varying solutions to the Hamilton–Jacobi–Bellman equation arising in finite-horizon optimal control. Building on this algorithm, value function approximation is applied to the Bellman equation by constructing neural networks with time-varying weights. A novel update law for the time-varying weights is put forward based on the idea of iterative learning control, which obtains optimal solutions more efficiently than previous approaches. Since system models may be unknown in real applications, we further propose a partially model-free time-varying policy iteration algorithm that applies integral reinforcement learning to obtain the time-varying value function. Moreover, convergence, stability, and optimality analyses are provided for each algorithm. Finally, simulations of different cases are given to verify the effectiveness and practicality of the proposed algorithms.
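For reference, the object at the heart of the model-based algorithm is the finite-horizon Hamilton–Jacobi–Bellman equation. The sketch below uses a generic affine system and a quadratic-in-control running cost; the symbols $f$, $g$, $Q$, $R$, and $\phi$ are illustrative assumptions about the problem setup rather than notation taken from the paper. For the dynamics $\dot{x} = f(x) + g(x)u$ and cost $J = \phi(x(t_f)) + \int_t^{t_f} \bigl( Q(x) + u^{\top} R u \bigr)\, d\tau$, the time-varying value function $V(x,t)$ satisfies

\[
-\frac{\partial V}{\partial t} \;=\; Q(x) + (\nabla_x V)^{\top} f(x) \;-\; \tfrac{1}{4} (\nabla_x V)^{\top} g(x) R^{-1} g(x)^{\top} \nabla_x V,
\qquad V(x, t_f) = \phi(x),
\]

with associated optimal control $u^{*}(x,t) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla_x V(x,t)$. A generic finite-horizon policy iteration (not necessarily identical to the paper's) then alternates policy evaluation, solving

\[
-\frac{\partial V^{(i)}}{\partial t} = Q(x) + \bigl(u^{(i)}\bigr)^{\top} R\, u^{(i)} + \bigl(\nabla_x V^{(i)}\bigr)^{\top}\bigl(f(x) + g(x)u^{(i)}\bigr),
\qquad V^{(i)}(x, t_f) = \phi(x),
\]

with policy improvement $u^{(i+1)} = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla_x V^{(i)}$. The explicit $\partial V / \partial t$ term is what distinguishes this finite-horizon, time-varying setting from the infinite-horizon algebraic HJB equation.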