Network slicing is a technique introduced by 3GPP to enable multi-tenant operation in 5G systems. However, supporting slicing at the air interface requires not only efficient optimization algorithms operating in real time but also tight integration of these algorithms into the 5G control plane. In this paper, we first present a priority-based mechanism that provides well-defined performance isolation among slices competing for resources. Then, to speed up the resource arbitration process, we propose and compare several supervised machine learning (ML) techniques. We show how to embed the proposed approach into the ITU-T standardized ML architecture. The proposed ML enhancement is evaluated under realistic traffic conditions with respect to the performance criteria defined by GSMA, while explicitly accounting for 5G millimeter wave channel conditions. Our results show that ML techniques provide suitable approximations of the resource allocation process, ensuring slice performance isolation, efficient resource use, and fairness. Among the considered algorithms, polynomial regression performs best: it outperforms the exact solution algorithm by 5-6 orders of magnitude in execution time, and outperforms both neural network and random forest algorithms in accuracy (by 20-40%), sensitivity to workload variations, and required training sample size. Finally, ML algorithms are generally prone to service level agreement (SLA) violations under high load and time-varying channel conditions, implying that an SLA enforcement system is needed in ITU-T's 5G ML framework.