Model-driven optimization enables the direct use of domain-specific modeling languages to define models, which are subsequently optimized by applying a predefined set of model transformation rules. Objectives guide the optimization process, which can range from a single-objective formulation yielding a single solution to a set of objectives that necessitates identifying a Pareto-optimal set of solutions. In recent years, a multitude of reinforcement learning (RL) approaches supporting both optimization cases has been proposed, and competitive results have been reported for various problem instances. However, their application to the field of model-driven optimization has not gained much attention yet, especially compared to the extensive application of meta-heuristic search approaches such as genetic algorithms. Thus, there is a lack of knowledge about the applicability and performance of RL for model-driven optimization. In this paper, we therefore present a general framework for applying RL to model-driven optimization problems. In particular, we show how a catalog of different RL algorithms can be integrated with existing model-driven optimization approaches that use a transformation rule application encoding. We exemplify this integration by presenting a dedicated RL extension for MOMoT. Building on this tool support, we investigate several case studies to validate the applicability of RL for model-driven optimization and compare its performance against a genetic algorithm. The results show clear advantages of using RL for single-objective problems, especially for cases where the transformation steps are highly dependent on each other. For multi-objective problems, the results are more diverse and case-specific, which further motivates the use of model-driven optimization frameworks that can employ different search approaches to find the best solutions.
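To make the transformation rule application encoding concrete, the following minimal Python sketch frames rule applications as the actions of a tabular Q-learning agent: states are models, actions are rule applications, and the reward is the improvement of a single objective. This is an illustrative sketch only, not MOMoT's actual API; the toy model representation, the rules split_largest and merge_smallest, and the variance objective are all invented for this example.

import random
from collections import defaultdict

# Toy "model": a tuple of class sizes; the single-objective goal is to
# balance responsibilities across classes (hypothetical example, not a
# MOMoT case study).
INITIAL_MODEL = (8, 1, 1)

def split_largest(model):          # rule 1: split the largest class in two
    i = max(range(len(model)), key=lambda k: model[k])
    if model[i] < 2:
        return None                # rule not applicable in this state
    half = model[i] // 2
    return tuple(sorted(model[:i] + (half, model[i] - half) + model[i + 1:]))

def merge_smallest(model):         # rule 2: merge the two smallest classes
    if len(model) < 2:
        return None
    s = tuple(sorted(model))
    return tuple(sorted((s[0] + s[1],) + s[2:]))

RULES = [split_largest, merge_smallest]

def objective(model):              # lower is better: variance of class sizes
    mean = sum(model) / len(model)
    return sum((x - mean) ** 2 for x in model) / len(model)

Q = defaultdict(float)             # Q[(state, action)] -> value estimate
alpha, gamma, eps, horizon = 0.1, 0.95, 0.2, 6

for episode in range(2000):
    model = INITIAL_MODEL
    for _ in range(horizon):
        if random.random() < eps:  # epsilon-greedy rule selection
            a = random.randrange(len(RULES))
        else:
            a = max(range(len(RULES)), key=lambda k: Q[(model, k)])
        nxt = RULES[a](model)
        if nxt is None:            # inapplicable rule: penalty, state unchanged
            reward, nxt = -1.0, model
        else:                      # reward = objective improvement of the step
            reward = objective(model) - objective(nxt)
        best_next = max(Q[(nxt, k)] for k in range(len(RULES)))
        Q[(model, a)] += alpha * (reward + gamma * best_next - Q[(model, a)])
        model = nxt

# Greedy rollout of the learned policy yields one rule application sequence,
# i.e., one solution in the rule-application encoding.
model = INITIAL_MODEL
for _ in range(horizon):
    a = max(range(len(RULES)), key=lambda k: Q[(model, k)])
    model = RULES[a](model) or model
    print(RULES[a].__name__, "->", model, "objective:", round(objective(model), 2))

Because the reward depends on the state reached by earlier rule applications, the sketch also hints at why RL can be advantageous when transformation steps are highly dependent on each other, as observed in the single-objective results above.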