We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in [35] to prove the regularity of the value function and the dynamic programming principle. Extending the networks and using Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton-Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate viscosity solution.

Mathematics Subject Classification. 49L25, 93E20, 60J25, 49L20

Acknowledgement. The authors would like to thank the anonymous referees for constructive remarks that helped improve the manuscript.

\[
v^{\delta}(x,\gamma) := \inf_{\alpha,\ X^{x,\gamma,\alpha}_{\cdot}\in\mathrm{network}} \mathbb{E}\left[\int_{0}^{\infty} e^{-\delta t}\, l_{\Gamma^{x,\gamma,\alpha}_{t}}\!\left(X^{x,\gamma,\alpha}_{t},\alpha_{t}\right) dt\right].
\]

$\big)\big)\, ds$, where $\alpha^{2}\in L^{0}\left(\mathbb{R}_{+}\times\mathbb{R}^{m}\times E;A\right)$. We set
\[
\left(X^{x_{0},\gamma_{0},\alpha}_{t},\Gamma^{x_{0},\gamma_{0},\alpha}_{t}\right)=\left(y^{\Upsilon_{1}}\!\left(t;\tau_{1},Y_{1},\alpha^{2}\right),\Upsilon_{1}\right), \quad \text{if } t\in[\tau_{1},\tau_{2}).
\]
The post-jump location $(Y_{2},\Upsilon_{2})$ satisfies