Abstract-Traditional supervised learning assumes that instances are described by observable attributes, and the goal is to learn to predict the labels of unseen instances. In many real-world applications, however, the values of some attributes are not only observable but can be proactively chosen by a decision maker. Furthermore, in some of these applications the decision maker is interested not only in generating accurate predictions, but also in maximizing the probability of a desired outcome. For example, a direct marketing manager can choose the color of the envelope (an actionable attribute) in which an offer is sent to a client, hoping that the right choice will increase the probability of a positive response. We study how to learn to choose the value of an actionable attribute so as to maximize the probability of a desired outcome in supervised learning settings. We emphasize that not all instances are equally sensitive to changes in the action. Accurate choice of an action is essential for borderline instances, e.g. clients who do not have a strong opinion. We formulate three supervised learning approaches for selecting the value of an actionable attribute at the instance level, and we focus the learning process on the borderline cases. The potential of the underlying ideas is demonstrated with synthetic examples and a case study on a real dataset.