“…For humans who are assisted by AI, it is therefore essential to be able to identify the strengths and weaknesses of the AI system (i.e., in which cases it is correct and in which it is wrong; see [9]). In this setting, recent research distinguishes three cases of reliance behavior: (i) relying on AI recommendations in too few cases (i.e., under-reliance, e.g., by underestimating AI performance; see [10,11]), (ii) relying on AI recommendations in too many cases (i.e., over-reliance, e.g., by overestimating AI performance; see [1,12,13]), and (iii) relying appropriately on AI recommendations (i.e., adhering to AI recommendations when they are correct and overriding them when they are wrong; see [5,9,14]). Thus far, research has identified many scenarios in which under-reliance or over-reliance results in reduced decision-making performance (e.g., [12,15]).…”