In engineering practice, the assessment of sensitivity is used to detect influential parameters in order to facilitate subsequent numerical simulation techniques. Since sensitivity analyses are preprocessing methods for sophisticated numerical simulation techniques, e.g. reliability-based optimization procedures, their application always increases the computational expense. As a result, it is reasonable to couple sensitivity analysis with artificial neural networks (ANN). On this basis, multi-faceted global sensitivity measures (GSM) can be formulated, taking advantage of different characteristics of the ANNs. Additionally, to take into account nonlinearities of the response surface, a new approach of sectional global sensitivity measures is introduced. Generally, the sensitivity is determined as $S_i = \hat{S}_i / \sum_{j=1}^{n} \hat{S}_j$, where $S_i$ denotes the sensitivity of interest and $\hat{S}_i$ a characteristic of the function $f : \mathbb{R}^n \to \mathbb{R}$ under investigation. This characteristic can be either the response $f$ itself or its first partial derivative $\partial_i f$.
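The normalization $S_i = \hat{S}_i / \sum_{j=1}^{n} \hat{S}_j$ can be sketched as follows; the function name and the example values are illustrative, not part of the original method:

```python
import numpy as np

def normalize_sensitivities(s_hat):
    """Normalize raw sensitivity characteristics S_hat_i so that the
    resulting sensitivities S_i = S_hat_i / sum_j S_hat_j sum to one."""
    s_hat = np.asarray(s_hat, dtype=float)
    return s_hat / s_hat.sum()

# Hypothetical raw characteristics for three input parameters:
# the resulting sensitivities are relative shares summing to one.
S = normalize_sensitivities([2.0, 1.0, 1.0])
```

Because only the ratio of the $\hat{S}_i$ matters, the same formula applies regardless of whether $\hat{S}_i$ was derived from the response itself or from a partial derivative.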
Sensitivity analysis with artificial neural networks

Perceiving an ANN not only as a surrogate model that approximates a functional relationship $f$, but rather as a versatile tool to reason about the behavior of $f$, several characteristics of ANNs can be capitalized on for sensitivity analysis [4]. Principally, these are the data storage (to memorize the characteristic of $f$), the differentiability (to determine partial derivatives analytically), the learning aptitude (to train on a set of initial information), and the efficient numerical evaluation. This makes it possible to deduce weighting-based (web), derivative-based (deb), training-based (trb), and variance-based (vab) sensitivity measures [3,4].

ANNs store their information within the synaptic weights $w^k_{j_k, j_{k+1}}$, which connect the neurons $j_k$ between the $k$-th and $(k+1)$-th network layer [1]. Therefore, the $w^k_{j_k, j_{k+1}}$ provide all essential information about the trained input data, i.e. the characteristic of $f$. Based on these synaptic weights, a global sensitivity measure sums the products of the absolute synaptic weights along all paths from the input neuron $i$ to the output, $\hat{S}_i = \sum_{j_2=1}^{N_2} \cdots \sum_{j_{s-1}=1}^{N_{s-1}} |w^1_{i,j_2}| \, |w^2_{j_2,j_3}| \cdots |w^{s-1}_{j_{s-1},1}|$, with $N_k$ the number of neurons in the $k$-th network layer, $k \in \{1, \ldots, s\}$, and $i$ indicating the respective input neuron. The treatment of positive and negative weights is handled controversially in the literature; in this approach, the absolute values of the weights are applied, since the total influence should be evaluated. As the first layer of weights has the greatest influence on the output of the ANN, concentrating on those weights already offers a good first estimate of the sensitivity, $\hat{S}_i = \sum_{j=1}^{N_2} |w_{ij}|$. The accuracy suffers in comparison to the weight product, but the manageability is increased, especially for ANNs with many hidden layers. However, formulating sensitivity measures from the weights alone approximates the mode of operation of an ANN only in a rough manner.
In detail, the influence of the weights behind the neurons k ≥ 2, which accommodate activ...