Currently, many online platforms offer programming-exercise libraries in which evaluation occurs automatically. The present work presents an analysis of two models that aim to estimate students' ability: Elo and Item Response Theory (IRT). Elo was developed to rank players from their game history, while IRT estimates ability from the responses given to a set of items. To apply the models, we used a database made available by an Online Judge platform. The results show differences between the models in the estimated abilities, differences we believe are related to the way each model estimates its parameters.
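To make the contrast concrete: Elo adjusts its estimates incrementally after every submission, whereas IRT typically fits parameters from a full response matrix. The sketch below shows a standard Elo update adapted to student-problem interactions; the function names, the K factor, and the logistic form are illustrative assumptions, not the exact formulation used in the work.

```python
import math

def expected_success(theta: float, beta: float) -> float:
    """Logistic probability that a student of ability theta
    solves a problem of difficulty beta."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def elo_update(theta: float, beta: float, correct: bool, k: float = 0.4):
    """Update ability and difficulty after one submission.
    k is an illustrative step size (assumption)."""
    outcome = 1.0 if correct else 0.0
    surprise = outcome - expected_success(theta, beta)
    return theta + k * surprise, beta - k * surprise

# Example: a correct answer on a hard problem raises the ability estimate.
theta, beta = 0.0, 1.0
theta, beta = elo_update(theta, beta, correct=True)
print(theta, beta)  # theta increases, beta decreases slightly
```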
Research indicates that human beings are endowed with multiple intelligences, skills, and abilities. In education, many exercises across different areas of knowledge demand multiple skills from students to be solved successfully. In computer science, programming is one such activity: it draws on multiple skills for problem-solving, and problems can be solved in more than one way (paths). In massive environments for teaching programming, automatic assessment systems commonly observe only the final result of the student's interaction with the learning object, identifying neither the individual interplay of the multiple skills needed to solve the problem nor the solution path adopted by the student. Many models have been proposed based on Elo, which uses performance expectation, and on Item Response Theory, but these models do not consider the various paths to solving a problem. The objective of this work is to propose a model, also based on performance expectation, that individually estimates students' multiple abilities in the context of massive online education, assuming that problems have more than one solution and that only the final result (right or wrong) is observable. An experimental setup is proposed to validate the model, involving the use and analysis of the proposed model through an experiment on a database, named beecrowd, and a case study with programming students. The model's results are satisfactory, since: i) it is possible to treat the student's abilities individually and to follow the evolution of each ability over time; ii) it is possible to predict the paths students adopt according to their abilities; iii) the model also shows positive results when integrated with a recommendation system, recommending problems compatible with the student's abilities.
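The abstract describes updating several abilities from a single right/wrong outcome on a problem that admits more than one solution path. The sketch below shows one way such an update could work, under our own assumptions (not necessarily the paper's): success on a path requires every skill on it, and the most probable path receives the credit or blame.

```python
import math

def p_skill(theta: float, beta: float) -> float:
    """Logistic chance of applying one skill at difficulty beta."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def p_path(thetas, path, beta):
    """Conjunctive model: every skill on the path must succeed (assumption)."""
    p = 1.0
    for skill in path:
        p *= p_skill(thetas[skill], beta)
    return p

def multi_skill_update(thetas, paths, beta, correct, k=0.3):
    """Credit/blame the skills of the most probable path (assumption)."""
    best = max(paths, key=lambda path: p_path(thetas, path, beta))
    surprise = (1.0 if correct else 0.0) - p_path(thetas, best, beta)
    for skill in best:
        thetas[skill] += k * surprise
    return best

# A problem solvable either by recursion or by iteration plus arrays.
thetas = {"recursion": -0.2, "iteration": 0.5, "arrays": 0.1}
paths = [("recursion",), ("iteration", "arrays")]
chosen = multi_skill_update(thetas, paths, beta=0.0, correct=True)
print(chosen, thetas)
```

Tracking each skill separately is what allows the model to follow the evolution of individual abilities over time and to predict which path a student is likely to take.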
The number of online platforms offering programming exercises is growing; students submit solutions to these exercises and receive automatic feedback from the system, without human intervention. These environments record many aspects of the submissions, so educational assessment models can be used to infer the skills exercised in each solution. In this work we present a comparative analysis of three models that estimate students' ability: Elo, Item Response Theory (IRT), and M-ERS (Multidimensional Extension of the ERS). Elo was developed to rank chess players from their game history but has been adapted to estimate students' ability from the history of problem submissions. IRT estimates ability from a set of responses given to a set of items; several IRT models exist, varying according to the type of response. M-ERS is an adaptation of Elo and IRT that combines the two models and tracks students' multiple skills. The Elo, two-parameter IRT, graded-response IRT, and M-ERS models were applied to a database made available by an Online Judge platform. The results point to differences between the models in the estimated abilities, differences believed to be related to the way each model estimates its parameters.
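For reference, the two-parameter (2PL) IRT model mentioned above gives the probability of a correct response as a logistic function of ability θ, item discrimination a, and difficulty b; ability can then be estimated by maximizing the likelihood of an observed response vector. The grid-search estimator below is an illustrative simplification, not the estimation procedure used in the work.

```python
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    """2PL IRT: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items):
    """Maximum-likelihood ability by grid search (illustrative).
    responses: list of 0/1; items: list of (a, b) pairs."""
    def log_lik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_2pl(theta, a, b)
            ll += math.log(p if r else 1.0 - p)
        return ll
    grid = [x / 100.0 for x in range(-400, 401)]  # theta in [-4, 4]
    return max(grid, key=log_lik)

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]
print(estimate_ability([1, 1, 0], items))
```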
The growing demand for professionals able to build and maintain software also increases the number of courses devoted to teaching programming. The present work proposes a model for evaluating source code produced in programming courses. The model is based on TF-IDF to identify and estimate computational-thinking skills in source code. The results show it to be a promising evaluation approach for comparing skills across different source files.
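A minimal illustration of the TF-IDF idea applied to source code, treating language keywords and identifiers as terms; the crude tokenizer and the example snippets are assumptions for illustration, not the paper's actual feature extraction.

```python
import math
import re
from collections import Counter

def tokenize(code: str):
    """Crude lexer: keep identifiers and keywords (assumption)."""
    return re.findall(r"[A-Za-z_]\w*", code)

def tfidf(docs):
    """Return one {term: weight} vector per source file."""
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter(term for toks in tokenized for term in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        total = len(toks)
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

sources = [
    "for i in range(n): total += v[i]",               # iterative style
    "def f(n): return 1 if n < 2 else n * f(n - 1)",  # recursive style
]
for vec in tfidf(sources):
    print(sorted(vec.items(), key=lambda kv: -kv[1])[:3])
```

The highest-weighted terms differ between the two files, which is the kind of signal such an approach can use to compare skills across sources.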