“…In this case it is also true that $V_\alpha(\gamma, \varphi) \le V_\alpha(\gamma, \pi)$ for a Markov policy $\varphi$ satisfying (4.6), and the equality takes place if $P^\varphi_\gamma(t, X) = 1$ for all $t \in \mathbb{R}_+$; see [10, Theorem 5]. As shown in [10, Example 1], this may not be true if the cost function $C$ also depends on an action chosen at jump epochs.…”
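
For context, here is a minimal sketch of why an inequality of this kind, with equality under the condition $P^\varphi_\gamma(t, X) = 1$, is plausible. It assumes the standard expected total discounted cost criterion, a nonnegative cost rate $C$ on a state–action space $X \times A$, the symbol $\pi$ for the original (possibly non-Markov) policy, and setwise domination of the marginal state–action distributions under $\varphi$ by those under $\pi$; none of these assumptions appear in the excerpt itself, and this is not necessarily the argument of [10, Theorem 5].

% Hedged sketch. The definition of $V_\alpha$, the policy symbol $\pi$, the action space $A$,
% and the domination of marginal distributions are illustrative assumptions, not statements
% taken from the excerpt or from [10].
\[
  V_\alpha(\gamma,\pi) \;=\; \mathbb{E}^{\pi}_{\gamma}\!\int_0^{\infty} e^{-\alpha t}\, C(x_t,a_t)\, dt,
  \qquad C \ge 0,\quad \alpha > 0.
\]
% Assume the marginal state-action distributions under the Markov policy are dominated setwise:
\[
  P^{\varphi}_{\gamma}(t, B \times A') \;\le\; P^{\pi}_{\gamma}(t, B \times A')
  \qquad \text{for all } t \in \mathbb{R}_+ \text{ and measurable } B \subseteq X,\ A' \subseteq A.
\]
% Writing the expectations as integrals against the marginals (Fubini) and using $C \ge 0$ gives
\[
  V_\alpha(\gamma,\varphi)
  \;=\; \int_0^{\infty} e^{-\alpha t}\!\int_{X \times A} C(x,a)\, P^{\varphi}_{\gamma}(t, dx \times da)\, dt
  \;\le\; \int_0^{\infty} e^{-\alpha t}\!\int_{X \times A} C(x,a)\, P^{\pi}_{\gamma}(t, dx \times da)\, dt
  \;=\; V_\alpha(\gamma,\pi).
\]
% If in addition $P^{\varphi}_{\gamma}(t, X) = 1$ for all $t \in \mathbb{R}_+$, then the dominated
% marginal is a probability measure; a setwise-dominated probability measure must equal the
% dominating (sub)probability measure, so the marginals, and hence the two values, coincide.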