Solving sparse linear systems is a key task in a number of computational problems, such as data analysis and simulations, and largely determines overall execution time. Choosing a suitable iterative solver algorithm can, however, significantly reduce time-to-completion. We present a deep learning approach designed to predict the optimal iterative solver for a given sparse linear problem. To this end, we detail useful linear system features to drive the prediction process, the metrics we use to quantify the iterative solvers' time-to-approximation performance, and a comprehensive experimental evaluation of the prediction quality of the neural network. Using hyperparameter optimization and an ablation study on the SuiteSparse matrix collection, we infer the importance of distinct features, achieving a top-1 classification accuracy of 60%.