Monte-Carlo nuclear reaction and transport codes are widely used to design accelerator-based nuclear physics experiments; conversely, many experiments are performed to validate these codes, which in turn can be applied to the design of full-scale nuclear power facilities or of new benchmark experiments. Dedicated model benchmark studies investigate a broad range of nuclear reactions and quantities, such as isotope formation and secondary-particle fluxes resulting from the interaction of GeV-range hadrons with monoisotopic targets; these data can be used to assess the corresponding systematic uncertainties of the models. Such benchmark studies, together with the many nuclear application experiments and simulations carried out by various groups over the last few decades, allow methodological lessons to be drawn. In this work, model uncertainties determined from available experimental data allow us to identify the effects of practitioner expertise, as well as of code design (user access to micro-scale parameters), on the range of uncertainties. We found that when simulations are performed by code developers or by highly experienced users, the model-to-experiment ratios generally agree with the limits established by dedicated benchmark studies. In other cases, the ratios tend to be either smaller (underestimation of model error) or larger (overestimation of model error) than those limits. A plausible explanation for these effects is suggested.
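To make the comparison concrete, the sketch below illustrates one way such model-to-experiment (C/E) ratios can be summarized; the arrays hold hypothetical placeholder values, not measured data, and the deviation-factor statistic shown is one metric commonly used in benchmark studies of this kind, not necessarily the one used in this work.

```python
import numpy as np

# Hypothetical example: calculated (C) and experimental (E) values for a
# benchmarked quantity, e.g. isotope production cross sections (mb).
# These numbers are illustrative placeholders, not real benchmark data.
calculated = np.array([12.3, 45.0, 8.7, 101.0, 3.2])
measured = np.array([10.9, 51.2, 9.5, 88.0, 4.1])

# Model-to-experiment (C/E) ratio for each data point.
ratios = calculated / measured

# A common summary statistic is the mean deviation factor
# <F> = 10**sqrt(<(log10 C/E)^2>): <F> = 1 means perfect agreement,
# <F> = 2 means the model deviates from experiment by roughly a
# factor of 2 on average.
log_ratios = np.log10(ratios)
mean_deviation_factor = 10.0 ** np.sqrt(np.mean(log_ratios**2))

print(f"C/E ratios: {np.round(ratios, 2)}")
print(f"Mean deviation factor <F>: {mean_deviation_factor:.2f}")
```

Under this convention, comparing the spread of C/E ratios from a given simulation campaign against the ⟨F⟩ limits established by dedicated benchmark studies indicates whether the quoted model error is under- or overestimated.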