actuator instead of hydraulic), to behaviors (e.g. a structural element doubling as an electric conductor), and to entire systems (e.g. a fixed-wing aircraft as opposed to a helicopter). Previously, a review on technology selection [1] identified the following challenges with technology evaluation and selection:

1) The application of design tools is limited to a specific vehicle type, because the sizing method is fixed [2]
2) Flexibility and scalability of disciplinary analysis tools are lacking [2]
3) Parameterization of geometry proves challenging in a generalized sense (i.e. problem-specific parameterizations are possible, but a geometry parameterization that holds for any problem is challenging, at the least)
4) Modeling of technologies is usually avoided and replaced by impact factors
5) Extensive use of expert judgment raises challenges due to subjectivity, conservatism, overconfidence and a lack of experts

Additionally, from discussions with practitioners at Saab, the following challenges with technology selection in the conceptual design phase were identified: (i) non-performance metrics (e.g. the effect of improved human-machine interaction) need to be included, although they prove difficult to define and quantify, (ii) technology descriptions are not yet meaningful: they cannot traverse from detailed to high-level descriptions, nor capture quantifiable effects that enable a fair comparison between technologies, (iii) finding the best technology portfolio without enumerating all possible combinations cannot yet be done objectively, (iv) the assessment of dependencies between technologies is too subjective, and (v) when uncertainty is quantified, wide uncertainty bands impair decision making.

Several conclusions can be drawn from these challenges. Metrics that are not quantifiable cannot be addressed, and hence technologies that affect only such metrics cannot be assessed meaningfully; alternatively, additional analysis methods should be developed. Uncertainty should be associated with technology (and system) readiness level, both for impact and for development time. Additionally, the amount of uncertainty should be limited, or decision making in the event of large uncertainty should be supported. Insight is more important at this stage than arriving at an optimal result; hence, preference is given to evaluating current possibilities rather than performing optimization. Therefore, a structured and traceable solution to technology definition, evaluation and selection is sought.

Conventionally, technology selection is performed by collecting the TRL and IRL of each technology, describing the technologies' effects in an impact matrix and their compatibility in a Technology Compatibility Matrix (TCM). Such an approach was taken in the works by Amadori et al. [3, 4], who depict the process as in Figure 1. The data collection phase is carried out almost entirely using expert judgment, with limited traceability and objectivity.
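To make this conventional process concrete, the sketch below illustrates, with purely hypothetical technology names and impact numbers (not taken from [3, 4]), how an expert-filled impact matrix and TCM are typically combined: all technology combinations are enumerated, incompatible portfolios are filtered out via the TCM, and the impact factors of the remaining portfolios are aggregated additively.

```python
import itertools

# Hypothetical example data: three technologies, two system-level metrics.
technologies = ["T1", "T2", "T3"]

# Impact matrix: expert-judged fractional change of each metric per technology.
impact = {
    "T1": {"mass": -0.02, "drag":  0.00},
    "T2": {"mass": +0.01, "drag": -0.03},
    "T3": {"mass": -0.01, "drag": -0.01},
}

# Technology Compatibility Matrix (TCM): False marks an incompatible pair.
compatible = {
    ("T1", "T2"): True,
    ("T1", "T3"): False,
    ("T2", "T3"): True,
}

def is_feasible(portfolio):
    """A portfolio is feasible if every technology pair in it is compatible."""
    return all(compatible[tuple(sorted(pair))]
               for pair in itertools.combinations(portfolio, 2))

def aggregate_impact(portfolio):
    """Naive additive aggregation of impact factors (ignores interactions)."""
    totals = {"mass": 0.0, "drag": 0.0}
    for tech in portfolio:
        for metric, delta in impact[tech].items():
            totals[metric] += delta
    return totals

# Exhaustive enumeration of all technology combinations.
for r in range(1, len(technologies) + 1):
    for portfolio in itertools.combinations(technologies, r):
        if is_feasible(portfolio):
            print(portfolio, aggregate_impact(portfolio))
```

The sketch also makes several of the challenges listed above visible: impact factors replace physical modeling of the technologies (challenge 4), the portfolio search is a brute-force enumeration of all combinations (challenge (iii)), and both the impact and compatibility entries rest entirely on expert judgment (challenges 5 and (iv)).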