MHD turbulence is likely to play an important role in several astrophysical scenarios where the magnetic Reynolds number is very large. Numerically, these cases can be studied efficiently by means of Large Eddy Simulations, in which the computational resources are used to evolve the system only up to a finite grid resolution. This resolution is not fine enough to capture all the relevant small-scale physics at play, which is instead effectively modeled by a set of additional terms in the evolution equations, known as the sub-grid-scale model. Here we extend this approach, commonly used in non-relativistic, non-magnetized, incompressible fluid dynamics, by applying the so-called gradient model to a general set of balance-law equations, which includes the relevant case in which a non-trivial inversion from conserved to primitive fields is needed. In particular, we focus on the relativistic, compressible, ideal MHD scenario, providing for the first time (and for any equation of state) all the additional sub-grid-scale terms. As an application, we consider box simulations of the relativistic Kelvin-Helmholtz instability, which is also the mechanism initially responsible for magnetic-field amplification in binary neutron star mergers and which cannot yet be fully captured by the finest-grid, longest simulations available. The performance of our model is numerically assessed by comparing it to the residuals arising from the filtering of high-resolution simulations. We find that the model reproduces very well the residuals obtained from simulations at resolutions a few times higher. Although the application shown here explicitly considers the Minkowski metric, it can be directly extended to general relativity, thus laying the basis for implementing the gradient sub-grid-scale model in GRMHD binary-merger simulations. Our results suggest that this approach will potentially be able to capture much better the small-scale dynamics attainable in full GRMHD simulations.
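For orientation, the gradient (Clark-type) closure referred to above can be sketched as follows for a generic quadratic nonlinearity. This is an illustrative, standard form rather than the specific expressions derived in this work: the fields $f$ and $g$ are generic placeholders, and the coefficient $\bar{\Delta}^{2}/12$ corresponds to the usual convention for a homogeneous box or Gaussian filter of width $\bar{\Delta}$.

% Illustrative sketch (standard gradient/Clark closure, not the paper's full set of terms):
% leading-order model for the sub-filter residual of a quadratic nonlinearity f g,
% assuming a homogeneous (box or Gaussian) filter of width \bar{\Delta}.
\begin{equation}
  \overline{f\,g} \;\simeq\; \bar{f}\,\bar{g}
  \;+\; \frac{\bar{\Delta}^{2}}{12}\,
  \partial_{k}\bar{f}\,\partial_{k}\bar{g}
  \;+\; \mathcal{O}\!\left(\bar{\Delta}^{4}\right).
\end{equation}

In this type of closure the sub-grid-scale terms are built entirely from gradients of the resolved (filtered) fields, with at most an overall pre-factor left to be calibrated, which is consistent with the a priori comparison against filtered high-resolution data described above.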