We present a program visualization tool called Jeliot 3 that is designed to help novice students learn procedural and object-oriented programming. The key feature of Jeliot is the fully or semi-automatic visualization of data and control flow. The development of Jeliot has been research-oriented: each version has had its own research agenda, arising from the design of the previous version and its empirical evaluation. In this process, the user interface and visualization have evolved to better suit the target audience, which for Jeliot 3 is novice programmers. In this paper we explain the model underlying the system and introduce the features of the user interface and visualization engine. Moreover, we have developed an intermediate language that decouples the interpretation of the program from its visualization. This has led to a modular design that permits both internal and external extensibility.
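As a rough illustration of the decoupling idea described above, the following minimal Python sketch shows an interpreter that emits a stream of intermediate instructions which a separate visualization component consumes. The instruction names and structure are illustrative assumptions, not the actual intermediate language used by Jeliot 3.

```python
from dataclasses import dataclass
from typing import Iterator, List, Tuple

# Hypothetical intermediate instruction; the real system defines its own format.
@dataclass
class Instruction:
    op: str            # e.g. "ASSIGN", "CALL", "RETURN" (illustrative names)
    args: Tuple

def interpret(program: List[Tuple]) -> Iterator[Instruction]:
    """Interpreter side: executes the program representation and emits
    intermediate instructions, knowing nothing about how they are drawn."""
    for op, *args in program:
        yield Instruction(op, tuple(args))

def visualize(stream: Iterator[Instruction]) -> None:
    """Visualization side: consumes the instruction stream; a real engine
    would animate data and control flow instead of printing."""
    for ins in stream:
        print(f"animate {ins.op} {ins.args}")

# The two halves communicate only through the instruction stream,
# so either side can be replaced or extended independently.
visualize(interpret([("ASSIGN", "x", 1), ("CALL", "max", "x", 2), ("RETURN", 2)]))
```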
We analyze the Computing Education Research (CER) literature to discover what theories, conceptual models, and frameworks recent CER builds on. This gives rise to a broad understanding of the theoretical basis of CER that is useful for researchers working in the area, and it has the potential to help CER develop its own identity as an independent field of study. Our analysis takes in seven years of publications (2005–2011, 308 papers) in three venues that publish long research papers in computing education: the journals ACM Transactions on Computing Education (TOCE) and Computer Science Education (CSEd), and the conference International Computing Education Research Workshop (ICER). We looked at the theoretical background works that are used or extended in the papers, not merely cited when describing related work. These background works include theories, conceptual models, and frameworks. For each background work we tried to identify the discipline from which it originates, to gain an understanding of how CER relates to its neighboring fields. We also identified theoretical works originating within CER itself, showing that the field is building on its own theoretical works. Our main findings are that there is a great richness of work on which recent CER papers build; that there are no prevailing theoretical or technical works broadly applied across CER; and that about half of the analyzed papers build on no previous theoretical work, although a considerable share of these construct their own theoretical foundations. We discuss the significance of these findings for the field as a whole and conclude with some recommendations.
Probabilistic Latent Semantic Analysis (PLSA) is an information retrieval technique proposed to address problems found in Latent Semantic Analysis (LSA). We have applied both LSA and PLSA in our system for grading essays written in Finnish, called Automatic Essay Assessor (AEA). We report results comparing PLSA and LSA on three essay sets from different subjects. The two methods were found to be nearly equal in accuracy, measured as the Spearman correlation between the grades given by the system and those given by a human. Furthermore, we propose methods for improving the use of PLSA in essay grading.
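To make the evaluation setup concrete, here is a minimal Python sketch of LSA-style scoring evaluated with Spearman correlation, as mentioned in the abstract. The term-document matrix, reference essay, and human grades are made-up toy data, and the similarity-to-reference scoring is only one possible grading scheme, not necessarily the one used in AEA.

```python
import numpy as np
from numpy.linalg import svd
from scipy.stats import spearmanr

# Toy term-document matrix: rows = terms, columns = essays (word counts).
# In practice this would be built from tokenized (Finnish) essay texts.
X = np.array([
    [2, 0, 1, 3],
    [1, 1, 0, 2],
    [0, 3, 1, 0],
    [1, 2, 2, 1],
], dtype=float)

# LSA: truncated SVD keeps the k strongest latent dimensions ("topics").
k = 2
U, s, Vt = svd(X, full_matrices=False)
essay_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T   # one k-dimensional vector per essay

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Score each essay by cosine similarity to a reference vector; here the
# first essay stands in for a model answer, purely for illustration.
reference = essay_vectors[0]
system_scores = [cosine(v, reference) for v in essay_vectors]

# Accuracy measured as in the abstract: Spearman correlation between the
# system's scores and (hypothetical) human grades for the same essays.
human_grades = [5, 2, 3, 4]
rho, p_value = spearmanr(system_scores, human_grades)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```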