Hyperparameter optimization of neural networks is a computationally expensive procedure that requires a large number of different model configurations to be trained. To reduce these costs, this work presents a distributed, hybrid workflow that runs the training of the neural networks on multiple Graphics Processing Units (GPUs) of a classical supercomputer, while predicting the configurations' performance with Quantum Support Vector Regression on a Quantum Annealer (QA). The workflow is shown to run on up to 50 GPUs and a QA simultaneously, fully automating the communication between the classical and the quantum systems. The approach is evaluated extensively on several benchmark datasets from the Computer Vision, High Energy Physics, and Natural Language Processing domains. Results show that resource savings of up to approximately 9% can be achieved while obtaining similar, and in some cases even better, accuracy, highlighting the potential of hybrid quantum-classical machine learning algorithms. The workflow code is made available open source to foster adoption in the community.
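To make the surrogate-assisted loop sketched in the abstract concrete, the snippet below is a purely illustrative, minimal sketch: a classical scikit-learn SVR stands in for the Quantum Support Vector Regression run on the annealer, and the search space, `sample_config`, and `train_and_evaluate` are hypothetical placeholders rather than components of the actual workflow.

```python
# Minimal sketch of a surrogate-assisted hyperparameter search loop.
# NOTE: illustrative only -- a classical SVR stands in for the paper's
# Quantum Support Vector Regression, and train_and_evaluate is a
# hypothetical placeholder for the distributed GPU training step.
import random
from sklearn.svm import SVR

def sample_config():
    # Hypothetical two-dimensional search space (learning rate, layer width).
    return [random.uniform(1e-4, 1e-1), random.randint(32, 512)]

def train_and_evaluate(config):
    # Placeholder for launching a full GPU training job and returning
    # the validation accuracy of the trained network (toy response here).
    lr, width = config
    return 1.0 - abs(lr - 0.01) - abs(width - 256) / 1000.0

history_x, history_y = [], []

# Warm-up phase: train a few configurations to fit the surrogate.
for _ in range(5):
    cfg = sample_config()
    history_x.append(cfg)
    history_y.append(train_and_evaluate(cfg))

# Main loop: the surrogate predicts the performance of candidate
# configurations, and only the most promising one is actually trained,
# which is where the resource savings come from.
for _ in range(10):
    surrogate = SVR().fit(history_x, history_y)
    candidates = [sample_config() for _ in range(50)]
    best = max(candidates, key=lambda c: surrogate.predict([c])[0])
    history_x.append(best)
    history_y.append(train_and_evaluate(best))

print("Best configuration found:", history_x[history_y.index(max(history_y))])
```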