Offering the same course simultaneously to thousands of students, whether in classroom teaching or in distance learning, requires standardization so that performance can be compared across classes and the success of the teaching–learning process evaluated. Digital technologies can make this easier. This study presents an Evaluation Process (EP) designed for an Introduction to Programming (IP) course for students of the Bachelor's Degree in Science and Technology (Engineering Education) at the Federal University of ABC. First, the course scenario is presented in the classroom, face-to-face modality (IP-FF), offered to about 2,000 students every year, and its problems are identified; then the EP designed and applied in Blended Learning classes (IP-BL) is described, including the software tools developed for this purpose. The two scenarios (IP-FF and IP-BL) are compared using data from all classes of the previous 9 years.
The 2020 pandemic imposed new demands on teaching practices to support students' distance learning. In this context, automated assessment (AA) is a pivotal resource, offering immediate and automatic feedback on students' programming tasks. Although the literature provides several contributions on the AA of Programming Exercises (PEs), very few works discuss the automatic generation of personalized PEs. This study reports our experience applying a new proposal for AA-PE in an Introduction to Programming (IP) course for a large group of students. The proposal's key feature is the ability to apply AA-PE and parameterized unified exams to different programming languages using the open-source tools MCTest and Moodle (with the Virtual Programming Lab [VPL] plugin). During the first quarter of 2019, teachers of 19 of 44 IP-FF (face-to-face) classes adopted our approach as a component of their pedagogical intervention. These classes achieved a higher pass rate (67.5%) than those that did not adopt our AA solution (59.1%), while the standard deviations were similar (22.5% and 21.3%, respectively). Additionally, preliminary results revealed a strong linear correlation (r = .93) between the pass rate and the average grade on the AA-PEs. In IP-BL (blended learning), two classes used our method in the exams, with 171 students and a pass rate of 70.4%. These results corroborate previous findings that continuous assessment combined with immediate feedback can contribute to students' learning.
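The core mechanism described above can be illustrated with a minimal sketch: a student's submission is executed against a set of test cases and immediate pass/fail feedback plus a grade is produced. The function names and grading scale here are illustrative assumptions, not the actual MCTest/VPL internals.

```python
# Minimal sketch of automated assessment (AA) of a programming exercise:
# run a student's solution against (input, expected-output) test cases
# and return a grade plus per-case feedback. Names are illustrative.

def assess(solution, test_cases):
    """Run `solution` on each (args, expected) pair; return grade and feedback."""
    feedback = []
    for args, expected in test_cases:
        try:
            result = solution(*args)
            ok = result == expected
        except Exception as exc:          # a crashing submission fails the case
            result, ok = repr(exc), False
        feedback.append((args, expected, result, ok))
    passed = sum(1 for *_, ok in feedback if ok)
    grade = 10.0 * passed / len(test_cases)   # assumed 0-10 grading scale
    return grade, feedback

# Example: a student's answer to "return the largest of two numbers"
def student_max(a, b):
    return a if a > b else b

cases = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]
grade, fb = assess(student_max, cases)
```

In a real VPL setup the submission would run in a sandboxed process and the feedback would be shown to the student immediately after submission; the loop above only captures the grading logic.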
Several studies have indicated that mobilizing cognitive and metacognitive strategies can improve student performance. This article describes criteria that support the use of these strategies to develop programming logic, and presents a checklist that relates them to resources and tools that can help teachers promote their use in virtual learning environments. Criteria, resources, and tools were specified to support the use of these strategies. A learning situation was conducted and later evaluated by students and teachers, supporting the validity of these strategies and criteria and the need for their use in computational environments.
Generating individualized exams can contribute to a more reliable assessment of students, but performing this procedure manually may not be feasible, especially at large scale. One alternative is the automatic generation of questions. This paper discusses an innovative solution to simplify test generation and correction through parameterized questions in the context of a four-month Introduction to Programming course under a blended-learning (IP-BL) approach. It combines the open-source tool MCTest with Moodle and the VPL plugin to generate and automatically evaluate parameterized programming-language questions. We applied an intervention based on this solution in two IP-BL groups (a total of 171 enrolled students) using Java.
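The idea of parameterized questions can be sketched as follows: each student receives a variant of the same exercise with different numeric parameters, so individual exams differ while assessing the same concept, and the expected answer is computed alongside the statement for automatic correction. The template, seeding scheme, and function names are illustrative assumptions, not MCTest's actual format.

```python
# Hedged sketch of parameterized question generation: one template,
# per-student parameters, and a machine-computed expected answer.
import random

TEMPLATE = ("Write a function that returns the sum of all multiples of "
            "{k} between 1 and {n} (inclusive).")

def generate_variant(student_id):
    """Produce a deterministic per-student variant and its expected answer."""
    rng = random.Random(student_id)      # seeded so regeneration is reproducible
    k = rng.randint(2, 9)
    n = rng.randint(50, 200)
    expected = sum(x for x in range(1, n + 1) if x % k == 0)
    return TEMPLATE.format(k=k, n=n), (k, n), expected

question, params, answer = generate_variant(student_id=12345)
```

Seeding the generator with the student identifier means the same variant (and reference answer) can be regenerated at correction time without storing every exam instance.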