This paper analyzes the effects of a perceived transition from a rule-based computer programming paradigm to an example-based paradigm associated with machine learning. While both paradigms coexist in practice, we critically discuss the distinctive epistemological and ethical implications of machine learning's “exemplary” type of authority. To capture its logic, we compare it with the computer programming rules that date to the mid-20th century, showing how rules and examples have regulated human conduct in significantly different ways. In contrast to the highly constructed, explicit, and prescriptive form of authority imposed by programming rules, machine learning models are trained on data that has been made into examples. These examples elicit norms in an implicit, emergent manner to make prediction and classification possible. We analyze three ways that examples are produced in machine learning: labeling, feature engineering, and scaling. We use the phrase “artificial naturalism” to characterize the tensions of this type of authority, in which examples sit ambiguously between data and norm.