Artificial attention models have been proposed to simulate human attentional behavior, either to predict it or to endow technical systems with the ability to filter relevant from irrelevant information in visual scenes. Such models are typically based on the concept of saliency, which reflects the conspicuity of a visual entity with respect to features such as color, intensity, or orientation. Beyond these stimulus-driven processes, considerable effort has been devoted to enriching the models with top-down influences, which are known to govern human attentional behavior. Mostly, this aspect is considered in the form of specific targets for visual search or very general characteristics such as the gist of a scene. For human attention it has been shown that objects which afford actions, such as graspable items in the action space, attract attention. Here we show that an artificial attention model that estimates such affordances predicts human performance in a change detection task better than a classic bottom-up saliency model. The implications are twofold: (1) The results add further evidence that human attention is strongly influenced by affordances, which we can model objectively and compare against a control based on visual saliency. (2) Integrating affordance estimation into technical attention systems provides a top-down influence that is neither overly specific nor overly general, but guides attention to objects that are potential targets of action, in keeping with the physical capabilities of the system; this is an advancement for technical cognitive systems.
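
As a rough illustration of the bottom-up baseline referred to above, the following minimal Python sketch computes an Itti-Koch-style saliency map from intensity and color-opponency contrast; it is a hypothetical example, not the model used in this work, the orientation channel is omitted for brevity, and the function name saliency_map and the Gaussian scales are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_map(rgb):
        # Normalize an H x W x 3 image and split it into feature channels:
        # intensity plus simple red-green and blue-yellow color opponency.
        rgb = rgb.astype(float) / 255.0
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        channels = [(r + g + b) / 3.0, r - g, b - (r + g) / 2.0]

        def conspicuity(c):
            # Center-surround contrast: fine scale minus coarse scale,
            # rescaled to [0, 1] so the channels are comparable.
            contrast = np.abs(gaussian_filter(c, 2) - gaussian_filter(c, 8))
            span = contrast.max() - contrast.min()
            return (contrast - contrast.min()) / (span + 1e-8)

        # Unweighted average of the per-feature conspicuity maps.
        return sum(conspicuity(c) for c in channels) / len(channels)

    # Usage: high values mark visually conspicuous regions of the image.
    example = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    print(saliency_map(example).shape)  # (64, 64)

In such stimulus-driven models, conspicuity is computed purely from local feature contrast; the affordance-based model discussed above adds a top-down signal on top of this kind of baseline.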