Abstract-Consider an observer (reporter) who wishes to optimally inform a distant agent about a physical stochastic process in the environment, while directed communication from the observer to the agent incurs a price. We define a metric, from a task-oriented perspective, for the information transferred from the observer to the agent. We develop a framework for optimizing an augmented cost function that is a convex combination of the transferred information and the paid price over a finite horizon. We suppose that the decision making takes place inside a source encoder and that the sampling schedule is the decision variable. Moreover, we assume that no measurement at the current time is available to the observer for decision making. We derive the optimal sampling policy using dynamic programming, and we show that it is a self-driven sampling policy based on a quantity that is in fact the value of information at each time instant. In addition, we use a semi-definite programming relaxation to provide a suboptimal sampling policy. Numerical and simulation results are presented for a simple unstable system.
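As an illustrative sketch of the objective described above (the symbols $\theta$, $d_k$, $\ell$, $\delta_k$, $\pi$, and the horizon $N$ are assumed notation for this sketch, not taken from the paper), the augmented cost can be pictured as a convex combination of a task-oriented information term and the communication price:
\[
  J(\pi) \;=\; \mathbb{E}^{\pi}\!\left[\,\sum_{k=0}^{N}\Big(\theta\, d_k \;+\; (1-\theta)\,\ell\,\delta_k\Big)\right], \qquad \theta \in [0,1],
\]
where $d_k$ stands for the task-oriented information loss at time $k$, $\ell$ for the price of one transmission, and $\delta_k \in \{0,1\}$ for the sampling decision. Under these assumed definitions, minimizing $J(\pi)$ by dynamic programming would yield, roughly, a self-driven rule of the form $\delta_k = \mathbb{1}\{\mathrm{VoI}_k \ge 0\}$, where $\mathrm{VoI}_k$ denotes the value of information at time $k$.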