Big volumes of data are collected and analyzed by the LHC experiments at CERN. The success of these scientific challenges is ensured by a great amount of computing power and storage capacity, operated over high-performance networks, within very complex computing models on the LHC computing grid infrastructure. Now in Run 2 data taking, the LHC has an ambitious and broad experimental program for the coming decades: it includes large investments in detector hardware, and it likewise requires commensurate investment in R&D in software and computing to acquire, manage, process, and analyze the sheer amounts of data to be recorded in the high-luminosity LHC (HL-LHC) era. The new rise of artificial intelligence, driven by the current big data era, by technological progress, and by the democratization and efficient allocation of resources at affordable cost through cloud solutions, poses new challenges but also offers extremely promising techniques, not only for the commercial world but also for scientific enterprises such as HEP experiments. Machine learning and deep learning are rapidly evolving approaches to characterizing and describing data, with the potential to radically change how data are reduced and analyzed, including at the LHC. This work aims at contributing to the construction of a machine learning "as a service" solution for CMS physics needs, namely an end-to-end data service that serves trained machine learning models to the CMS software framework. Towards this ambitious goal, this work contributes, first, a proof of concept of a first prototype of such an infrastructure, and, second, a specific physics use case: the signal versus background discrimination in the study of CMS all-hadronic top quark decays, carried out with scalable machine learning techniques.