In this paper, we extend our previous results on the stabilization of recurrent neural networks from input-to-state stability to noise-to-state stability, and present a new approach to noise-to-state stabilization in probability for stochastic recurrent neural networks driven by noise of unknown covariance. The approach is developed using the Lyapunov technique, inverse optimality, differential game theory, and the Hamilton-Jacobi-Isaacs equation. Numerical examples demonstrate the effectiveness of the proposed approach.
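For orientation, the following is a minimal sketch of one standard noise-to-state setting; the system model, the symbols $f$, $g$, $\Sigma$, $w$, and the precise definition below are assumptions for illustration and are not necessarily the exact formulation used in the paper. A stochastic system driven by noise of unknown covariance is commonly written as
\begin{equation*}
  dx = f(x,u)\,dt + g(x)\,\Sigma(t)\,dw ,
\end{equation*}
where $w$ is a standard Wiener process and $\Sigma(t)\Sigma(t)^{\top}$ is the unknown (but bounded) noise covariance. In this setting, the system is said to be noise-to-state stable in probability if, for every $\varepsilon > 0$, there exist $\beta \in \mathcal{KL}$ and $\gamma \in \mathcal{K}$ such that
\begin{equation*}
  \mathbb{P}\Bigl\{\, |x(t)| \le \beta(|x(0)|, t)
    + \gamma\Bigl(\sup_{0 \le s \le t} \|\Sigma(s)\Sigma(s)^{\top}\|_{F}\Bigr) \Bigr\}
  \ge 1 - \varepsilon , \qquad t \ge 0 ,
\end{equation*}
so that the state remains bounded, with high probability, by a decaying term in the initial condition plus a gain on the size of the noise covariance.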