Despite serious reservations over transparency, accountability, bias, and related issues, algorithms offer a potentially significant means of furthering human well-being by influencing beliefs, desires, and choices. Should governments be permitted to cultivate socially beneficial attitudes, or to enhance the well-being of their citizens, through the use of algorithmic tools? In this chapter I argue that there are principled moral reasons why governments may not shape the ends of individuals in this way, even when doing so would benefit well-being. Such shaping would undermine the kind of ethical independence on which state legitimacy rests. However, I also argue that this prohibition does not extend to what Rawls calls a 'sense of justice': the dispositions necessary to uphold just political and socioeconomic institutions. Where traditional methods of influence, such as education, prove lacking, algorithmic enhancement towards those ends may be permissible. Mireille Hildebrandt's fictitious piece of computational software, 'Toma', serves as the point of departure for this argument and provides many of the insights regarding the autonomic nature of such influence.