The paper studies the ability of recurrent neural networks to model dynamic systems when some relevant state variables are not measurable. Neural architectures based on virtual states, which arise naturally from a state-space representation, are introduced and compared with the more traditional neural output error architectures. Despite the evident modeling potential of virtual state architectures, we found experimentally that their performance depends strongly on the efficiency of training. A novel validation criterion for neural output error architectures is suggested, which allows the neural network to be assessed not only in terms of its approximation accuracy but also with respect to stability.