The edge-to-data-center computing continuum is the aggregation of computing resources located anywhere between the network edge (e.g., close to 5G antennas) and servers in traditional data centers. Kubernetes is the de facto standard for container orchestration. It is very efficient in a data center environment, but it does not deliver the same performance when edge resources are added: at the edge, resources are more limited and networking conditions change over time. In this paper, we present a methodology that lowers the cost of running applications in the edge-to-cloud computing continuum. This optimization is enabled by a cost-aware scheduler. We also monitor the Key Performance Indicators (KPIs) of the applications to ensure that cost optimizations do not negatively impact their Quality of Service. In addition, to keep performance optimal even when users are moving, we introduce a background process that periodically checks whether a better location is available for the application. To demonstrate the performance of our scheduling approach, we evaluate it on a vehicle cooperative perception use case, a representative 5G application.
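To make the two mechanisms in the abstract concrete, the following minimal Python sketch illustrates (under our own assumptions, not the paper's actual implementation) a cost-aware placement rule that picks the cheapest node still meeting a latency KPI, plus a background loop that periodically re-checks for a better location; the names `Node`, `cost_per_hour`, `expected_latency_ms`, and `migrate` are hypothetical.

```python
# Illustrative sketch only: cost-aware scheduling with a KPI constraint,
# and a periodic relocation check. All names here are hypothetical.
from dataclasses import dataclass
import time

@dataclass
class Node:
    name: str
    cost_per_hour: float        # assumed monetary cost of hosting the app here
    expected_latency_ms: float  # assumed estimated latency to the current user

def schedule(nodes: list[Node], latency_kpi_ms: float) -> Node | None:
    """Cheapest node whose expected latency satisfies the application's KPI."""
    eligible = [n for n in nodes if n.expected_latency_ms <= latency_kpi_ms]
    return min(eligible, key=lambda n: n.cost_per_hour) if eligible else None

def relocation_loop(get_nodes, get_current, migrate,
                    latency_kpi_ms: float, period_s: float = 60.0) -> None:
    """Background process: periodically check whether a better location exists."""
    while True:
        best = schedule(get_nodes(), latency_kpi_ms)
        current = get_current()
        if (best is not None and best.name != current.name
                and best.cost_per_hour < current.cost_per_hour):
            migrate(best)  # move the application to the cheaper valid location
        time.sleep(period_s)
```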