This paper studies constrained Markov decision processes under the total expected discounted cost optimality criterion, with a state-action-dependent discount factor that may take any value between zero and one. Both the state space and the action space are assumed to be Borel spaces. Using the linear programming approach, which consists in restating the control problem as a linear program over a set of occupation measures, we show the existence of an optimal stationary Markov policy. Our results are based on the study of weak-strong topologies on the space of occupation measures and of Young measures on the space of Markov policies.
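The linear programming formulation mentioned above can be sketched schematically as follows. This is a standard template for discounted constrained MDPs, not the paper's exact statement; the symbols (cost functions $c_i$, constraint bounds $k_i$, initial distribution $\nu$, transition kernel $Q$, discount function $\alpha$, state space $X$, action space $A$) are assumed notation introduced here for illustration:

```latex
\begin{align*}
\text{minimize}\quad & \int_{X \times A} c_0 \, d\mu \\
\text{subject to}\quad & \int_{X \times A} c_i \, d\mu \le k_i,
  \qquad i = 1, \dots, q, \\
& \mu(B \times A) = \nu(B)
  + \int_{X \times A} \alpha(x,a)\, Q(B \mid x,a)\, \mu\bigl(d(x,a)\bigr)
  \quad \text{for every Borel set } B \subseteq X, \\
& \mu \ \text{a finite (nonnegative) measure on } X \times A .
\end{align*}
```

The last equality is the characteristic (flow) equation identifying $\mu$ as an occupation measure: it accounts for the initial distribution plus the discounted expected transitions, with the state-action-dependent discount factor $\alpha(x,a)$ appearing inside the integral. An optimal $\mu$ then disintegrates into a stationary Markov policy.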