The mathematical formulation of constitutive models to describe the path-dependent, that is, inelastic, behavior of materials is a challenging task and has been a focus of mechanics research for several decades. There have been increased efforts to facilitate or automate this task through data-driven techniques, impelled in particular by the recent revival of neural networks (NNs) in computational mechanics. However, it seems questionable to simply disregard the fundamental findings of constitutive modeling gained over decades of research when developing NN-based approaches. Herein, we present a comparative study of different feedforward and recurrent neural network architectures for modeling 1D small strain inelasticity. Within this study, we divide the models into three basic classes: black box NNs, NNs enforcing physics in a weak form, and NNs enforcing physics in a strong form. The first class of networks learns constitutive relations from data while the underlying physics is completely ignored, whereas the latter two are constructed such that they can account for fundamental physics; special attention is paid to the second law of thermodynamics in this work. Conventional linear and nonlinear viscoelastic as well as elastoplastic models are used for training data generation and, later on, as references. After training with random walk time sequences containing information on stress, strain, and, for some models, internal variables, the NN-based models are compared to the reference solutions, whereby both interpolation and extrapolation are considered. Besides the quality of the stress prediction, the related free energy and dissipation rate are analyzed to evaluate the models. Overall, the presented study enables a clear assessment of the advantages and disadvantages of different NN architectures for modeling inelasticity and gives guidance on how to train and apply these models.
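To make the data-generation setup concrete, the following is a minimal sketch of how a random walk strain sequence can drive a conventional 1D linear viscoelastic reference model (a standard linear solid with one internal variable), yielding the stress, free energy, and dissipation rate mentioned above. All variable names, parameter values, and the explicit Euler time discretization are illustrative assumptions, not taken from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative material parameters (assumed values, not from the paper):
# equilibrium stiffness, Maxwell-branch stiffness, viscosity
E_inf, E_1, eta = 1.0, 2.0, 5.0
dt, n_steps = 0.01, 1000

# Random walk strain path, in the spirit of the training sequences
eps = np.cumsum(rng.normal(0.0, 1e-3, n_steps))

eps_v = np.zeros(n_steps)   # internal (viscous) strain
sigma = np.zeros(n_steps)   # stress
psi = np.zeros(n_steps)     # free energy
diss = np.zeros(n_steps)    # dissipation rate

for k in range(1, n_steps):
    # explicit Euler step of the evolution equation  eps_v' = (E_1/eta) * (eps - eps_v)
    eps_v[k] = eps_v[k - 1] + dt * (E_1 / eta) * (eps[k - 1] - eps_v[k - 1])
    sigma[k] = E_inf * eps[k] + E_1 * (eps[k] - eps_v[k])
    psi[k] = 0.5 * E_inf * eps[k] ** 2 + 0.5 * E_1 * (eps[k] - eps_v[k]) ** 2
    # dissipation rate eta * eps_v'^2 = (E_1**2 / eta) * (eps - eps_v)**2
    diss[k] = (E_1 ** 2 / eta) * (eps[k] - eps_v[k]) ** 2

# Second law of thermodynamics: the dissipation rate is non-negative
assert np.all(diss >= 0.0)
```

The tuples (strain, stress) per time step, optionally augmented by the internal variable `eps_v`, then form the kind of sequence data on which the NN-based models can be trained, and the non-negativity of `diss` is exactly the thermodynamic consistency that the physics-enforcing architectures are meant to preserve.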