The use of artificial intelligence as an instrument to assist judges in determining sentences in criminal cases gives rise to many theoretical challenges. The purpose of this article is to examine one of these challenges, known as the “input problem.” The problem is said to arise from two circumstances: first, that in order for an algorithm to provide a sentence recommendation, it must be supplied with case-specific information; and second, that the task of presenting an adequate picture of a crime often turns out to be highly complex. Even though this problem has been noted since the earliest attempts at developing sentencing support systems, almost no one has considered its ethical nature. This article aims to fill that void. First, it is shown that the input problem has been subject to somewhat different interpretations. Second, several possible answers are considered as to when and why the problem constitutes an ethical challenge. Third, a few suggestions are presented as to how undesirable implications of complexity at the input stage might be ameliorated by tailoring the way sentencing algorithms are developed and used in the work of criminal courts.