This article investigates the optimal control problem (OCP) for a class of discrete‐time nonlinear systems with state constraints. First, to overcome the difficulty posed by the constraints, the original constrained OCP is transformed into an unconstrained OCP via a system transformation technique. Second, a new cost function is designed to mitigate the effect of the system transformation on the optimality of the original system. Third, a novel off‐policy deterministic approximate dynamic programming (ADP) scheme is developed to obtain a near‐optimal solution to the transformed OCP. Compared with existing off‐policy deterministic ADP schemes, the developed scheme relaxes the requirements on the learning data and reduces the computational cost of training neural networks. Fourth, the convergence and stability of the developed ADP scheme are analyzed in the presence of approximation errors. Finally, the developed ADP scheme with the designed cost function is tested on two numerical cases, and simulation results confirm its effectiveness.