This article proposes a model-free framework for solving infinite-horizon optimal control problems for nonlinear systems subject to input constraints. Specifically, two Physics-Informed Neural Networks (PINNs), which embed the Lyapunov stability theorem and the convergence conditions of the policy iteration algorithm, are used to approximate the value function and the control policy, respectively. A Reinforcement Learning (RL) algorithm that requires neither a first-principles nor a data-driven model of the nonlinear system is then developed to iteratively learn a near-optimal control policy. Furthermore, we provide a rigorous theoretical analysis establishing the conditions under which the control policy learned by RL stabilizes the closed-loop system and the iterative algorithm converges. Finally, the proposed Physics-Informed Reinforcement Learning (PIRL) method is applied to a chemical process example to demonstrate its effectiveness.
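To make the algorithmic structure summarized above concrete, the following is a minimal sketch of one possible realization in PyTorch. It is not the article's implementation: the Q-style critic (standing in for the value function so that the policy-improvement step needs no system model), the toy linear plant used to generate transitions, and all network sizes, names, and hyperparameters (u_max, gamma, alpha) are illustrative assumptions. The Lyapunov decrease condition enters as a hinge penalty in the actor loss, and the input constraint is enforced by tanh saturation of the actor output.

import torch
import torch.nn as nn

torch.manual_seed(0)
nx, nu, u_max, gamma, alpha, dt = 2, 1, 1.0, 0.99, 1e-3, 0.05

# Critic approximating the optimal cost-to-go; a Q-style input (x, u) stands in
# for the paper's value function so that policy improvement requires no model.
critic = nn.Sequential(nn.Linear(nx + nu, 64), nn.Tanh(), nn.Linear(64, 1))
actor_body = nn.Sequential(nn.Linear(nx, 64), nn.Tanh(), nn.Linear(64, nu))

def actor(x):
    # tanh saturation enforces the input constraint |u| <= u_max
    return u_max * torch.tanh(actor_body(x))

def Q(x, u):
    return critic(torch.cat([x, u], dim=-1))

opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(actor_body.parameters(), lr=1e-3)

def stage_cost(x, u):
    # quadratic stage cost of the infinite-horizon performance function
    return (x ** 2).sum(-1, keepdim=True) + 0.1 * (u ** 2).sum(-1, keepdim=True)

# In the model-free setting, transitions (x, u, x') would be logged from the
# real process; a toy linear plant stands in here only to keep the sketch runnable.
A = torch.tensor([[0.0, 1.0], [-1.0, -0.5]])
B = torch.tensor([[0.0], [1.0]])

def sample_batch(n=256):
    x = 2 * torch.rand(n, nx) - 1
    u = (2 * torch.rand(n, nu) - 1) * u_max          # exploratory inputs
    xn = x + dt * (x @ A.T + u @ B.T)
    return x, u, xn

for it in range(200):                                # outer policy-iteration loop
    x, u, xn = sample_batch()
    c = dt * stage_cost(x, u)
    for _ in range(5):                               # policy evaluation: fit Bellman residual
        with torch.no_grad():
            target = c + gamma * Q(xn, actor(xn))
        loss_c = ((Q(x, u) - target) ** 2).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # Policy improvement with a hinge penalty encoding the Lyapunov decrease
    # condition V(x') - V(x) <= -alpha * ||x||^2 on sampled transitions.
    v_x, v_xn = Q(x, actor(x)), Q(xn, actor(xn))
    lyap = torch.relu(v_xn - v_x + alpha * (x ** 2).sum(-1, keepdim=True)).mean()
    loss_a = v_x.mean() + lyap
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

In practice the transition tuples would come from operating data of the actual process rather than a simulated plant, and the penalty weight alpha would be tuned so that the learned cost-to-go acts as a Lyapunov certificate for the closed-loop system in the sense analyzed in the article.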