Feature attribution methods (AMs) are a simple means of providing explanations for the predictions of black-box models such as neural networks. Owing to their conceptual differences, however, the numerous available methods yield ambiguous explanations. While this allows for different insights into the model, it also complicates the decision of which method to adopt. This paper therefore summarizes the current state of the art regarding AMs, covering the requirements and desiderata of the methods themselves as well as the properties of their explanations. Based on a survey of existing methods, a representative subset consisting of the δ-sensitivity index, permutation feature importance, variance-based feature importance in artificial neural networks, and DeepSHAP is described in greater detail and, for the first time, benchmarked in a regression context. Specifically for this purpose, a new verification strategy for model-specific AMs is proposed. As expected, the explanations' agreement with intuition and with one another clearly depends on the AMs' properties. This has two implications: first, careful reasoning about the selection of an AM is required; second, it is recommended to apply multiple AMs and combine their insights in order to reduce the model's opacity even further.
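As a brief illustration of one of the benchmarked methods, the following is a minimal sketch of permutation feature importance in a regression setting. The `model`, the mean squared error as the score, and the `n_repeats` parameter are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=None):
    """Permutation feature importance for a regression model.

    The importance of feature j is the increase in mean squared error
    when column j is randomly shuffled, averaged over n_repeats shuffles.
    `model` is assumed to expose a scikit-learn-style predict(X) method.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean((model.predict(X) - y) ** 2)  # MSE on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-target association
            scores.append(np.mean((model.predict(X_perm) - y) ** 2))
        importances[j] = np.mean(scores) - baseline
    return importances
```

Because this scheme only queries the model through its predictions, it is model-agnostic; by contrast, methods such as DeepSHAP or variance-based feature importance in artificial neural networks exploit the model's internals and are model-specific, which is what motivates the separate verification strategy proposed in the paper.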