Purpose
The purpose of this paper is to present the rationale, technical framework, content creation workflow, and evaluation of a multilingual massive open online course (MOOC) designed to facilitate information literacy (IL) while considering cultural aspects.

Design/methodology/approach
A good-practice analysis formed the basis for the technical and content framework. The evaluation consisted of three phases: first, students filled out a short self-assessment questionnaire and a shortened, adapted version of a standardized IL test. Second, they completed the full IL MOOC. Third, they filled out the full version of the standardized IL test and a user experience questionnaire.

Findings
The results show, first, that the designed workflow was suitable in practice and led to the implementation of a full-grown MOOC. Second, the implementation itself offers implications for future projects developing multilingual educational resources. Third, the evaluation shows that participants achieved significantly higher results on a standardized IL test after attending the MOOC as mandatory coursework, with variations observed between the student groups in the participating countries. Fourth, self-motivation to complete the MOOC proved to be a challenge for students asked to attend it as a nonmandatory, out-of-classroom task; multilingual facilitation alone appears insufficient to increase active MOOC participation.

Originality/value
This paper presents an innovative approach to developing multilingual IL teaching resources and is one of the first works to evaluate the impact of an IL MOOC on learners' experience and learning outcomes in an international evaluation study.
Artificial Intelligence (AI) has been adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as validation and auditing of AI remain unresolved. AI's opaqueness (i.e., the "black box" problem) makes validation challenging for auditors. Explainable AI (XAI) is the proposed technical countermeasure that can support the validation and auditing of AI. We developed an XAI-based validation approach for AI in sensitive use cases that facilitates understanding of a system's behaviour. We conducted a case study in pharmaceutical manufacturing, where strict regulatory requirements apply. The validation approach and an XAI prototype were developed through multiple workshops and then tested and evaluated with interviews. Our approach proved suitable for collecting the evidence required for a software validation, but it demands additional effort compared to a traditional software validation. AI validation is an iterative process, and clear regulations and guidelines are needed.