Over the last decades, digital competence has become essential in the workplace; nowadays it is difficult to find a job that requires no ICT skills. At the same time, there is a lack of ecosystems for adult reskilling in digital competence, and most existing ones do not use a common language and terminology, which limits their uptake by a wider public. In addition, digital competence cannot be assessed with simple self-assessment tests alone; it requires more complex tools such as simulations or other activities based on real scenarios. Considering this, we designed a performance-based evaluation system following a pragmatic approach based on the DigComp framework. We carried out a needs analysis based on expert consultation (63 teleworkers and 82 entrepreneurs) to create an assessment syllabus and implement the assessment modules. We then conducted an expert analysis (n = 21) of the relationship between the content of the tests and the construct they were intended to measure. After refinement, the system was piloted by end-users across Europe (n = 525). The results confirmed that DigComp was the most appropriate reference framework given the transversality of digital competence, providing researchers with clear, well-defined criteria.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10758-021-09516-3.
Until recently, most digital literacy frameworks were based on assessment frameworks used by commercial entities. The release of the DigComp framework has enabled the development of tailored implementations for evaluating digital competence. However, most of these implementations rely on self-assessment, measuring only lower-order cognitive skills. This paper reports on a study to develop and validate an assessment instrument that includes interactive simulations to assess citizens’ digital competence; such formats are particularly important for evaluating complex cognitive constructs like digital competence. Additionally, we selected two approaches for designing the tests based on their scope: at the competence level or the competence-area level. Their overall and dimensional validity and reliability were analysed, and we summarise the issues addressed in each phase and the key points to consider in new implementations. For both approaches, the items present satisfactory difficulty and discrimination indicators. Validity was ensured through expert validation, and Rasch analysis revealed good EAP/PV reliabilities. The tests therefore have sound psychometric properties that make them reliable and valid instruments for measuring digital competence. This paper contributes to the growing set of tools designed to evaluate digital competence and highlights the necessity of measuring higher-order cognitive skills.
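As a minimal illustration of the kind of item indicators the abstract mentions, the sketch below computes two classical test-theory statistics for dichotomously scored items: item difficulty (the proportion of correct responses) and item discrimination (the point-biserial correlation between an item and the rest-score, i.e. the total score excluding that item). This is not the authors' code, and the response matrix is hypothetical; the study itself additionally used Rasch modelling, which is not reproduced here.

```python
# Hedged sketch of classical item analysis (not the study's actual code).
from statistics import mean, pstdev

def item_difficulty(responses, item):
    """Proportion of test-takers answering `item` correctly (0..1)."""
    return mean(row[item] for row in responses)

def item_discrimination(responses, item):
    """Corrected point-biserial: correlation of the item score with the
    rest-score (total score excluding the item itself)."""
    scores = [row[item] for row in responses]
    rest = [sum(row) - row[item] for row in responses]
    ms, mr = mean(scores), mean(rest)
    cov = mean((s - ms) * (r - mr) for s, r in zip(scores, rest))
    return cov / (pstdev(scores) * pstdev(rest))

# Hypothetical responses: 6 test-takers x 3 items, 1 = correct, 0 = incorrect.
data = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]

print(item_difficulty(data, 0))      # ≈ 0.667 (moderately easy item)
print(item_discrimination(data, 0))  # ≈ 0.343 (positive discrimination)
```

In practice, difficulty values near 0 or 1 flag items that are too hard or too easy for the target population, and low or negative discrimination flags items that do not separate high- from low-performing test-takers.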