“…In a stationary setting, agents' optimal policy functions solve the following Bellman equation for $t = 1, 2, 3, \ldots$: [displayed equation omitted]. Note that this function does not depend on $\xi_{t+1}$ because
$$E\big[U(x_{i\tau}, s_{i\tau}, q_\tau; \theta) \mid \{q_{i\tau}\}_\tau, x_{t+1}, s_{t+1}, \xi_{t+1}\big] = E\big[U(x_{i\tau}, s_{i\tau}, q_\tau; \theta) \mid \{q_{i\tau}\}_\tau, x_{t+1}, s_{t+1}\big]$$
for all $\tau \ge t+1$. We have [displayed equation omitted], so agent $i$'s optimal policy can be expressed compactly as [displayed equation omitted]. As in Hong and Shum (2004), the optimal policy function $q(x_t, s_t; \theta, \gamma)$ will be nondecreasing in $s_t$ conditional on $x_t$ if $U(x, s, q; \theta)$ is supermodular in $(q, s)$ given $x$. This is a useful result because it enables us to recover $s_{it}$ by inverting conditional quantiles of $q_{it}$ given $x_{it}$.…”
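The quantile-inversion idea in the last sentence can be sketched numerically. The snippet below is a minimal illustration, not the paper's procedure: the policy function `policy`, the binary support of $x$, and the uniform marginal for the latent state $s$ are all hypothetical assumptions chosen so that the inversion is transparent. Because the policy is strictly increasing in $s$ given $x$, the conditional quantile rank of the observed choice $q_{it}$ within an $x$-cell equals the quantile rank of $s_{it}$; with $s \sim U(0,1)$, that rank is (approximately) $s_{it}$ itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical policy function, strictly increasing in s given x
# (this monotonicity is what supermodularity of U delivers).
def policy(x, s):
    return x + np.log1p(s)

n = 10_000
x = rng.choice([0.0, 1.0], size=n)   # observed state (binary, for illustration)
s = rng.uniform(size=n)              # latent state, assumed U(0,1)
q = policy(x, s)                     # observed choices

# Recover s_it by inverting conditional quantiles of q_it given x_it:
# within each x-cell, the rank of q_it equals the rank of s_it,
# and with a U(0,1) marginal the empirical quantile rank estimates s_it.
s_hat = np.empty(n)
for xv in np.unique(x):
    m = x == xv
    ranks = q[m].argsort().argsort()       # ranks 0..n_m-1 within the cell
    s_hat[m] = (ranks + 0.5) / m.sum()     # empirical conditional quantile

print(np.max(np.abs(s_hat - s)))  # recovery error shrinks as n grows
```

With a general (known or estimated) marginal for $s$, the last step would compose the empirical conditional rank with that marginal's quantile function rather than using the rank directly.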