A natural language generation approach to support understanding and traceability of multi-dimensional preferential sensitivity analysis in multi-criteria decision making

Abstract

Multi-Criteria Decision Analysis (MCDA) enables decision makers (DMs) and decision analysts (DAs) to analyse and understand decision situations in a structured and formalised way. With the increasing complexity of decision support systems (DSSs), it becomes challenging for both expert and novice users to understand and interpret model results. Natural language generation (NLG) techniques are used in various DSSs to address this challenge, as they reduce the cognitive effort required to understand decision situations. However, NLG techniques in MCDA have so far mainly been developed for deterministic decision situations or one-dimensional sensitivity analyses. In this paper, a concept for generating textual explanations for a multi-dimensional preferential sensitivity analysis in MCDA is developed. The key contribution is an NLG approach that provides detailed explanations of the implications of preferential uncertainties in Multi-Attribute Value Theory (MAVT). It generates a report that assesses the influence of simultaneous or separate variations of the inter-criteria and intra-criteria preferential parameters determined within the decision analysis. We explore the added value of the natural language report in an online survey. Our results show that the NLG approach is particularly beneficial for difficult interpretational tasks.
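As background for the approach summarised above, MAVT is commonly formalised by the additive value function V(a) = Σ_i w_i · v_i(a_i), where the weights w_i constitute the inter-criteria preferential parameters and the single-attribute value functions v_i the intra-criteria preferential parameters. The following minimal Python sketch illustrates one way a multi-dimensional preferential sensitivity analysis of the weights can be operationalised; the alternatives, values, and perturbation scheme are illustrative assumptions and not taken from the paper:

import random

# Hypothetical MAVT setting: two alternatives scored on three criteria,
# with single-attribute values v_i(a) already normalised to [0, 1].
values = {
    "A": [0.8, 0.4, 0.6],
    "B": [0.5, 0.9, 0.3],
}
base_weights = [0.5, 0.3, 0.2]  # inter-criteria weights, summing to 1

def overall_value(vals, weights):
    # Additive MAVT aggregation: V(a) = sum_i w_i * v_i(a)
    return sum(w * v for w, v in zip(weights, vals))

def perturbed_weights(weights, delta, rng):
    # Vary every weight simultaneously by up to +/- delta, then
    # renormalise so the perturbed weights again sum to one.
    raw = [max(0.0, w + rng.uniform(-delta, delta)) for w in weights]
    total = sum(raw)
    return [w / total for w in raw]

# Multi-dimensional sensitivity analysis: sample many simultaneous
# weight variations and record how often the ranking changes.
rng = random.Random(42)
base_rank = sorted(values, key=lambda a: -overall_value(values[a], base_weights))
flips = 0
trials = 10_000
for _ in range(trials):
    w = perturbed_weights(base_weights, delta=0.1, rng=rng)
    rank = sorted(values, key=lambda a: -overall_value(values[a], w))
    flips += rank != base_rank
print(f"Ranking changed in {flips / trials:.1%} of sampled weight vectors")

A quantitative finding such as the printed rank-reversal frequency is the kind of result that the NLG approach developed in this paper would verbalise in its textual report.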
Introduction

With the aim of enabling transparent and systematic support in complex decision situations, Multi-Criteria Decision Analysis (MCDA) provides a formalised framework for the analysis of different decision alternatives (Stewart, 1992; Geldermann et al., 2009). While such decision support approaches aim to provide guidance to decision makers (DMs), their increasing mathematical complexity often hinders straightforward understanding and traceability on the part of the DMs. Consequently, considerable cognitive effort is required to analyse, interpret, and derive adequate implications from the obtained model results, which is particularly challenging for novice users (Spiegelhalter and Knill-Jones, 1984; Henrion and Druzdzel, 1991; Gregor and Benbasat, 1999). DMs may then regard such models as a 'black box' and mistrust or even reject them (Brans and Mareschal, 1994; Bell et al., 2003), which leads to a gap between the information that is available and the information that can actually be processed.

To compensate for this, further explanations of decision analysis results promote understanding of the decision situation and thus help to increase trust in and acceptance of the system (Greer et al., 1994; Greef and Neerincx, 1995; Dhaliwal and Benbasat, 1996; Gregor and Benbasat, 1999; Parikh et al., 2001; Geldermann, 2010). The use of natural language generation (NLG) techniques to generate such explanations automatically from the model results has been proposed, for instance, by Pap...