Current research does not resolve how people will judge harmful activist investments when machines (machine learning algorithms) are involved in the investment process as advisors rather than as the ones “pulling the trigger”. On the one hand, machines might diffuse responsibility for a socially responsible but harmful investment. On the other hand, machines could exacerbate the blame assigned to the investment fund, which may be penalized for outsourcing part of the decision process to an algorithm. We addressed this question experimentally. In our experiment (N = 956), participants judged an investment fund whose decision to short-sell a company suspected of committing financial fraud was advised by either a human research team or a machine learning algorithm. Results suggest that investment funds are judged similarly blameworthy for an erroneous short-selling decision regardless of whether they relied on human or machine intelligence to support it. This finding highlights a novel and relevant circumstance in which reliance on algorithms does not backfire by making the final decision-maker (e.g., an investment fund) more blameworthy. Nor does such reliance lessen the perceived blameworthiness of the final decision-maker by turning algorithms into “electronic scapegoats” for providing well-intended but harmful advice.